Q:
AWS Python Glue Job Not Importing Numeric Columns into RDS
I have a Glue job that takes a CSV file from an S3 bucket and imports the data into a Postgres RDS table. It connects to the database with a JDBC connection. The string/varchar columns are being imported, but the numeric columns are not.
Here are the Postgres RDS column types:
And here is the Python Glue script:
def __step_mapping_columns(self):
    # Script generated for node S3 bucket
    dynamicFrame_dept_summary = self.glueContext.create_dynamic_frame.from_options(
        format_options={"quoteChar": '"', "withHeader": True, "separator": ","},
        connection_type="s3",
        format="csv",
        connection_options={
            "paths": [
                ""
            ],
            "recurse": True,
        },
        transformation_ctx="dynamicFrame_dept_summary",
    )
    # Script generated for node ApplyMapping
    applyMapping_dept_summary = ApplyMapping.apply(
        frame=dynamicFrame_dept_summary,
        mappings=[("PROCESS_MAIN", "string", "process_main", "string"),
                  ("PROCESS_CORE", "string", "process_core", "string"),
                  ("DC", "string", "dc", "string"),
                  ("BAG_SIZE", "string", "bag_size", "string"),
                  ("EVENT_30_LOC", "string", "start_time_utc", "string"),
                  ("VOLUME", "long", "box_volume", "long"),
                  ("MINUTES", "long", "minutes", "long"),
                  ("PLAN_MINUTES", "long", "plan_minutes", "long"),
                  ("PLAN_RATE", "long", "plan_rate", "long")],
        transformation_ctx="applyMapping_dept_summary",
    )
    logger.info("Applied column mappings")
    return applyMapping_dept_summary
Does anyone know what the issue might be?
A:
Figured it out. I needed to typecast those columns to the long type first, because the DynamicFrame is unsure about the data type.
dynamicFrame_dept_summary = (
    dynamicFrame_dept_summary
    .resolveChoice(specs=[('VOLUME', 'cast:long')])
    .resolveChoice(specs=[('MINUTES', 'cast:long')])
    .resolveChoice(specs=[('PLAN_MINUTES', 'cast:long')])
    .resolveChoice(specs=[('PLAN_RATE', 'cast:long')])
)
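For what it's worth, resolveChoice accepts a list of specs, so the chain above can also be written as a single call (a sketch against the same frame from the question):

# Equivalent single call: resolveChoice takes a list of (column, action) specs.
dynamicFrame_dept_summary = dynamicFrame_dept_summary.resolveChoice(
    specs=[("VOLUME", "cast:long"),
           ("MINUTES", "cast:long"),
           ("PLAN_MINUTES", "cast:long"),
           ("PLAN_RATE", "cast:long")]
)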
Tags: amazon_s3, amazon_web_services, aws_glue, postgresql, python

Q:
How does ascii hexbin become 4 bit?
https://biorobotics.fi-p.unam.mx/wp-content/uploads/Courses/arquitectura_de_computadoras/material_de_apoyo/68HC11.pdf
In Buffalo Entry Points in HC11, there are these lines
I couldn't figure it out. Ascii hex are 7-bit with 127 numbers. How can it change into 4-bit?? What is it exactly doing?
A:
Just for convenience a verbatim copy from the linked PDF:
.HEXBIN EQU $FF85 ; = JMP HEXBIN
; Convert ascii hex char in A to 4-bit binary number.
; Shift binary number into SHFTREG from the right.
; SHFTREG is a 2-byte (4 hex digits) buffer. If A
; register did not contain a hex character, location
; TMP1 is incremented and SHFTREG is unchanged.
I sorted your (sub-)questions in some subjective logical order...
Ascii hex are 7-bit with 127 numbers.
Well, no. ASCII defines a 7-bit code for non-printable and printable characters, that's right. You can write these codes in hex, but that is not what is meant here. You can choose any number base, not only hex, to write the codes.
What the author means are the characters we use for hex digits, and which are coded in ASCII, hence "ASCII hex char". These are '0' to '9' coded as $30 to $39, 'A' to 'F' coded as $41 to $46, and commonly also 'a' to 'f' coded as $61 to $66.
A character always contains hex character from 00 to FF. how could register A not contain a hex character?
As you already said before, ASCII characters are only defined as 7-bit values. They range from $00 to $7F. However, there are extended ASCII character sets, but they differ at many codes.
Not all 8-bit values are ASCII codes for hex characters, only those I listed above. But you can express all 8-bit values by 2 hex characters. For these 2 hex characters you need 14 bits, commonly stored as 2 8-bit values.
How can it change into 4-bit??
As the input to the routine is just one ASCII character, provided in register A and interpreted as a hex digit, the result is just the 4-bit value of the hex digit input character. Or none, if the character is not a hex digit.
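In Python terms, the decoding of a single hex character looks like this (just an analogy for the value mapping, not what the ROM routine does internally):

value = int("F", 16)       # ASCII 'F' ($46) decodes to 15
print(value, bin(value))   # 15 0b1111 -- fits in 4 bits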
If what you described was not based on my deleted example, but direct knowledge. What register did it put the binary number?
As the comment says, the result is shifted from the right into the RAM location SHFTREG.
what is SHFTREG, what address.. because it is not register A.
(last lines in orignal message deleted because I didn't use # but only % in ldaa. when I used #, it returns 00 to ff for original register A of 00 to ff. So it must be keeping the converted 4 bit binary elsewhere, where?
The return value of register A is not documented. You are right, SHFTREG is not the register A. Since the linked PDF lacks the definition of SHFTREG, I cannot say for sure. It is a 2-byte location most probably in RAM. Chapter 2.7 documents in table 2 the range of $0033 to $00FF as used for Buffalo. I'd assume it is somewhere there.
Note: This is the deleted example:
I tried running FF85 in HC11. If A=04, it remains A=04 after running Ldaa #04, jmp $FF85. But if the ldaa is #65 for example, jmp FF85 will produce A=9A. How?
The prefix # marks a value as "immediate". Since you said above that you used % instead of #, I wonder that the assembler did not throw an error. The prefix % marks a binary literal, but you gave it non-binary digits. Without the # you wrote an instruction that loads A from a RAM location, not with a literal. Anyway, it is not really relevant, since you looked at the value of A after returning. But that is not documented and therefore tells us not much.
What is it exactly doing?
This routine expects an ASCII coded character as a hex digit and decodes the value of the hex digit. The result is shifted from the right into SHFTREG, if the character is a hex digit. Else it does not change SHFTREG, but increments another RAM location labeled as TMP1.
| Input in A | 4-bit value shifted into SHFTREG | Action on TMP1 |
|---|---|---|
| '0' = $30 = 48 | $0 | none |
| : | : | : |
| '9' = $39 = 57 | $9 | none |
| 'A' = $41 = 65 | $A | none |
| : | : | : |
| 'F' = $46 = 70 | $F | none |
| 'a' = $61 = 97 | $A | none |
| : | : | : |
| 'f' = $66 = 102 | $F | none |
| any other | none | increment |
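To make that behaviour concrete, here is a small Python model of what the routine is documented to do, with SHFTREG modelled as a 16-bit integer (an illustration of the described behaviour, not the actual Buffalo code):

SHFTREG = 0x0000  # 2-byte buffer: holds the last 4 hex digits entered
TMP1 = 0          # incremented on non-hex input

def hexbin(a):
    """Decode one ASCII char in A; shift its 4-bit value in from the right."""
    global SHFTREG, TMP1
    ch = chr(a)
    if ch in "0123456789abcdefABCDEF":
        SHFTREG = ((SHFTREG << 4) | int(ch, 16)) & 0xFFFF
    else:
        TMP1 += 1  # not a hex character: SHFTREG stays unchanged

for c in b"1A2F":
    hexbin(c)
print(hex(SHFTREG))  # 0x1a2f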
Tags: ascii

Q:
Telethon New Message Event Handler waits minute
Telethon event handler waits 1 minute before sending out a burst of messages at the same time.
I tried removing functions from other sources, as I thought that could be it, and it did not work.
code:
from telethon import TelegramClient, events
import logging
import time
#from main import add

logging.basicConfig(format='[%(levelname) 5s/%(asctime)s] %(name)s: %(message)s', level=logging.WARNING)

api_id =
api_hash =

client = TelegramClient('anon', api_id, api_hash)

@client.on(events.NewMessage)
async def my_event_handler(event):
    print(event.raw_text)
    #add(event.raw_text)

client.start()
client.run_until_disconnected()
A:
Try uninstalling and reinstalling Telethon.
I also can't log in!
A:
For me it works OK and I have tested it.
The Python process needs to run the whole time.
#exit()
import sys
from telethon import TelegramClient, events
import logging
import time
import telethon.tl.functions as _fn

logging.basicConfig(format='[%(levelname) 5s/%(asctime)s] %(name)s: %(message)s', level=logging.WARNING)

# row and path come from the answerer's own setup (e.g. a config record);
# substitute your own api_id, api_hash and session path here.
api_id = row[2]
api_hash = row[3]
client = TelegramClient(path + str(api_id), api_id, api_hash)

@client.on(events.NewMessage)
async def my_event_handler(event):
    print(event.raw_text)
    print(event)
    #add(event.raw_text)

client.start()
client.run_until_disconnected()

print('Finish...')
Tags: python, telegram, telethon

Q:
Background modal at 100% not filling the screen
I'm trying to create a background modal that's supposed to fill the entire height of the page.
The modal only fills about half the page, around 950px (which is the 100% viewable portion).
Tried to change the units, tried using calc, used wrapping components.
P.S. When the modal is called by JS the display changes from none to block
#modalBackground {
  display: none;
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  background-color: rgb(0 0 0 / 0.7);
  z-index: 10;
}

<body>
  ...stuff
  <div id="modalBackground"></div>
  ...stuff
</body>
A:
Have you tried using position: fixed instead of absolute?
If you want the modal to take the whole screen, I think position: fixed makes more sense.
A:
With position: absolute you position an element relative to its closest positioned ancestor. We can't see the rest of your code but you probably have your modal inside another element that is positioned and so it can only fill that element's area.
To get the behaviour you want, position: fixed is more appropriate. This positions the element relative to the viewport:
#modalBackground {
  position: fixed;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  background-color: rgb(0 0 0 / 0.7);
  z-index: 10;
}

<body>
  ...stuff
  <div id="modalBackground"></div>
  ...stuff
</body>
A:
Try making the container's position the default, which is static. Then set max-width to 100vw, which expands the container to the width of the screen, while the image itself is set to 100% so it respects the width of the screen as well. Additionally, I find setting an aspect ratio a good practice, especially if your dynamic site accepts different images with different aspect ratios.
#modalBackground {
  max-width: 100vw;
  aspect-ratio: 16 / 9; /* your preferred ratio */
  object-fit: cover;
  object-position: center;
  background-color: rgb(0 0 0 / 0.7);
  z-index: 10;
}
Hope this helps!
Tags: css, html, modal_dialog, sass

Q:
flutter dropdown items from list - no instance method
I am trying to fill dropdown items. I am getting a JSON string from an API and I save it with GetStorage.
I save it like this:
box.write('itemsFromApi', listFromApi[0].myItems);
and my string looks like this:
[{aaa: 2672366, bbb: 11312074, ccc: 1},
{aaa: 2672553, bbb: 11312015, ccc: 1}]
In my homepage
List _listx = [];

@override
void initState() {
  _listx = GetStorage().read('itemsFromApi');
}
And I try to fill the dropdown items like below.
I am getting the error NoSuchMethodError (NoSuchMethodError: Class 'int' has no instance method 'where').
items: _listx[0]["ccc"]
    .where((p0) => p0.ccc == 1)
    .map(
      (item) => DropdownMenuItem<String>(
A:
You are trying to call where on an integer, _listx[0]["ccc"]. You probably want to do _listx.where((p0) => p0.ccc == 1) instead.
Tags: dropdown, flutter, flutter_getx

Q:
Linking multiple lists with a variable
I'm trying to link multiple lists with a variable, with the output being an item from one of the 'multiple' lists. The variable needs to hold the name of the list, so that the index of the item in the one list is the same as the index of the item in one of the others. Sorry if it's a duplicate, but I couldn't find anything that I can understand.
creature_type = "easy"
creature_type = "medium"
creature_type = "hard"
list1 = ['slime', 'dog', 'chicken']
list2 = ['orc', 'wolf']
list3 = ['dragon', 'golem', 'vampire']
attack4 = ['spits juice', 'bites', 'pecks']
attack5 = ['slams', 'howls']
attack6 = ['breaths fire', 'throws rocks', 'transforms']
The output should be the attack for the creature in the first three lists.
A:
Use dictionaries and zip:
creatures = {'easy': ['slime', 'dog', 'chicken'],
'medium': ['orc', 'wolf'],
'hard': ['dragon', 'golem', 'vampire']}
attacks = {'easy': ['spits juice', 'bites', 'pecks'],
'medium': ['slams', 'howls'],
'hard': ['breaths fire', 'throws rocks', 'transforms']}
choice = 'medium'
linked = list(zip(creatures[choice], attacks[choice]))
Output:
[('orc', 'slams'), ('wolf', 'howls')]
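To then look up one creature's attack directly, the same pair can be turned into a mapping (the attack_for name is mine, for illustration):

attack_for = dict(zip(creatures[choice], attacks[choice]))
print(attack_for['wolf'])  # howls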
Tags: python

Q:
How to concatenate 2 video files using FFmpeg?
I have two video files, intro.mp4 and video.mp4, and I want it so that intro.mp4 is attached to video.mp4.
I tried using the solution given here by running ffmpeg -i intro.mp4 -i new.mp4 -filter_complex "[0:v] [0:a] [1:v] [1:a] concat=n=2:v=1:a=1 [v] [a]" -map "[v]" -map "[a]" outputt.mp4, but I got this error:
[Parsed_concat_0 @ 0x555ded10a180] Input link in0:v0 parameters (size 1920x1080, SAR 1:1) do not match the corresponding output link in0:v0 parameters (567x400, SAR 1:1)
[Parsed_concat_0 @ 0x555ded10a180] Failed to configure output pad on Parsed_concat_0
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Error while processing the decoded data for stream #1:0
A:
For CMD:
ffmpeg -i "concat:intro.mp4|video.mp4" -codec copy output.mp4
For batch (you need to put your input files inside the "inputs" folder, in the same directory as your FFmpeg; your output files will be inside the "outputs" folder):
@echo off
mkdir inputs
mkdir outputs

:Begin

echo.
set /P iformat=Insert the format of the input files:

echo.
set /P input1=Insert the name of the first input:

echo.
set /P input2=Insert the name of the second input:

echo.
set /P output=Insert the name and format of the output (e.g. output.mp4):

for %%i in (inputs/*%iformat%) do (
    echo file 'inputs/%input1%.%iformat%' > concatlist.txt
    echo file 'inputs/%input2%.%iformat%' >> concatlist.txt
    ffmpeg -y -f concat -i concatlist.txt -c copy outputs\%output%
)

del concatlist.txt
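For completeness, the same concat-demuxer approach can be scripted from Python (a sketch using the file names from the question; stream copy still requires both clips to share codecs and parameters):

import subprocess

# Write the list file that ffmpeg's concat demuxer reads.
with open("concatlist.txt", "w") as f:
    f.write("file 'intro.mp4'\n")
    f.write("file 'video.mp4'\n")

# -c copy only works when both inputs share codecs/resolution;
# otherwise re-encode (or scale the inputs) instead of stream-copying.
subprocess.run(
    ["ffmpeg", "-y", "-f", "concat", "-i", "concatlist.txt",
     "-c", "copy", "output.mp4"],
    check=True,
)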
Tags: concatenation, ffmpeg, video

Q:
What is the fastest way to check if a substring is in a string as an entire word or term, like RegEx with boundaries?
I am trying to find the fastest way to check if a substring is in a string as an entire word or term. Currently, I'm using RegEx, but I need to perform thousands of verifications and RegEx is being VERY slow.
There are many ways to approach this. The easiest way to verify is substring in string:
substring = "programming"
string = "Python is a high-level programming language"
substring in string
>>> True
On the other hand, it's a naive solution when we need to find the substring as an entire word or term:
substring = "program"
string = "Python is a high-level programming language"
substring in string
>>> True
Another solution is to split the string into a list of words and verify if the substring is in that list:
substring = "program"
string = "Python is a high-level programming language"
substring in string.split()
>>> False
Nevertheless, it doesn't work if the substring is a term. To resolve this, another solution would be to use RegEx:
import re
substring = "high-level program"
string = "Python is a high-level programming language"
re.search(r"\b{}\b".format(substring), string) != None
>>> False
However, my biggest problem is that the solution is REALLY slow if you need to perform thousands of verifications.
To mitigate this issue, I created some approaches that, although they are faster than RegEx (for the use I need), still are a lot slower than substring in string:
substring = "high-level program"
string = "Python is a high-level programming language"
all([word in string.split() for word in substring.split()])
>>> False
Although simple, the above approach didn't fit because it ignores substring word order, returning True if the substring was "programming high-level", unlike the solution in RegEx. So, I created another approach verifying if the substring is in a ngram list where each ngram has the same number of words as the substring:
from nltk import ngrams
substring = "high-level program"
string = "Python is a high-level programming language"
ngram = list(ngrams(string.split(), len(substring.split())))
substring in [" ".join(tuples) for tuples in ngram]
>>> False
EDIT: Here is a less slow version, working with the same principle, but using only built-in functions:
substring = "high-level program"
string = "Python is a high-level programming language"
length = len(substring.split())
words = string.split()
ngrams = [" ".join(words[i:i+length]) for i in range(len(words) - length + 1)]
substring in ngrams
>>> False
Does anyone know a faster approach to find a substring inside a string as an entire word or term?
A:
Simply loop through the string, slice it according to the substring length, and compare the slice with the substring; if they are equal, return True.
Illustration:
strs = "Coding"
substr = "ding"
slen = 4
i = 0

check = strs[i:slen+i] == substr

# 1st iteration
strs[0:4+0] == "ding"
"Codi" == "ding"  # False

# 2nd iteration
i = 1
strs[1:4+1] == "ding"
"odin" == "ding"  # False

# 3rd iteration
i = 2
strs[2:4+2] == "ding"
"ding" == "ding"  # True
Solution
def str_exist(string, substring, slen):
    for i in range(len(string)):
        if string[i:slen+i] == substring:
            return True
    return False

substring = "high-level program"
string = "Python is a high-level programming language"
slen = len(substring)

print(str_exist(string, substring, slen))
OUTPUT
True
A:
Check this out. I've added comments in my code for better understanding of what this algorithm is doing.
def check_substr(S: str, sub_str: str) -> bool:
    """
    This function tells whether the given sub-string
    in a string is present or not.

    Parameters
        S: str: The original string
        sub_str: str: The sub-string to be checked

    Returns
        result: boolean: Whether the string is present or not
    """
    i = 0
    pointer = 0

    while (i < len(S)):
        # This means that we are already in that word
        # whose sub-part is already matched. For eg:
        # `program` in `programming`. Therefore we are
        # going to skip the rest of the word and check
        # the next word instead.
        if (S[i] != ' ' and pointer == len(sub_str)):
            while (i < len(S) and S[i] != ' '):
                i += 1
            i += 1
            pointer = 0

            if (i >= len(S)):
                break

        # If we encounter a space, we check whether we
        # have already found the sub-string or not.
        elif (S[i] == ' ' and pointer == len(sub_str)):
            break

        if (S[i] == sub_str[pointer]):
            pointer += 1
        else:
            # If the current element of the original
            # string matched with the first element of
            # the sub-string then we increment the
            # pointer by 1. Otherwise we set it to 0.
            pointer = 1 if (S[i] == sub_str[0]) else 0

        i += 1

    return pointer == len(sub_str)

S = "Python is a high-level programming"
print(check_substr(S, "high-level program"))
print(check_substr(S, "programming language"))
Output
False
False
Time Complexity
O(n)
Edits:
As @PGHE pointed out in the comments, we can also do the checking in punctuation characters and not only in spaces. Since the OP hasn't mentioned anything about the punctuation, I'm keeping this answer as it is.
A:
Add spaces on both sides of the substring and string, then test 'substring in string'
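A minimal sketch of that padding trick (the function name is mine; it assumes words are separated by single spaces, so punctuation and other whitespace are not handled):

def contains_term(string: str, term: str) -> bool:
    # Padding both sides with spaces makes the endpoints behave
    # like word boundaries for a plain substring test.
    return f" {term} " in f" {string} "

print(contains_term("Python is a high-level programming language",
                    "high-level program"))      # False
print(contains_term("Python is a high-level programming language",
                    "high-level programming"))  # True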
Tags: contains, python, regex, string, substring

Q:
How to Toggle a UIMenu Action
I have a UIMenu where one of the options is show/hide. I want the title to be "show" when I am hiding some contents in the view, and "hide" when I am showing some contents in the view. I added a global boolean to the view controller class. Is there a way to toggle the title of the menu option based on the value of the global boolean?
let menu = UIMenu(title: "menu", children: [
UIAction(title: "show", handler: menuHandler)])
A:
You can use a variable in place of title string:
var theTitle: String = ""
if myBoolVar == true {
theTitle = "show"
} else {
theTitle = "hide"
}
let menu = UIMenu(title: "menu", children: [
UIAction(title: theTitle, handler: menuHandler)])
Using a ternary expression, that can be shortened to:
let theTitle: String = myBoolVar ? "show" : "hide"
let menu = UIMenu(title: "menu", children: [
UIAction(title: theTitle, handler: menuHandler)])
or even like this (although, it sacrifices some readability):
let menu = UIMenu(title: "menu", children: [
UIAction(title: myBoolVar ? "show" : "hide", handler: menuHandler)])
Tags: swift, uikit, uiviewcontroller

Q:
Append Functionality in Python is not working as desired
In the below program, I am trying to add all my "new_list" values into my "fin_list". But append is not working as expected and is overwriting previous entries with whatever the "new_list" value is in that particular loop.
def all_subsequences(ind, a, new_list, fin_list):
    if (ind >= len(a)):
        print(new_list)
        fin_list.append(new_list)
        return

    new_list.append(a[ind])
    all_subsequences(ind+1, a, new_list, fin_list)
    #new_list.remove(new_list[len(new_list)-1])
    new_list.pop()
    all_subsequences(ind+1, a, new_list, fin_list)

    return fin_list

a = [3,1,2]
new_list = []
final_list = []
result = all_subsequences(0, a, new_list, final_list)
print(result)
Here the output at each level is as below:
[3, 1, 2], [3, 1], [3, 2], [3], [1, 2], [1], [2], []
Since the last value is an empty list, the final list at the end is as below:
[[], [], [], [], [], [], [], []]
Link to Python sandbox:
https://pythonsandbox.com/code/pythonsandbox_u21270_9PqNjYIsl7M85NGf4GBSLLrW_v0.py
I have tried to use extend instead of append inside the base condition, but that is not the kind of result I am looking for. I am open to any suggestion to resolve this problem.
A:
When you call fin_list.append(new_list), you are appending a reference to new_list to fin_list instead of a copy of new_list. Therefore, when you do new_list.pop() later, if you print fin_list, you will find it has also changed.
The situation can be illustrated by this example:
foo = [1, 2, 3]
bar = []
bar.append(foo)
print(f"bar: {bar}")
# modify foo and you will find that bar is also modified
foo.append(4)
print(f"bar: {bar}")
The simplest way to solve the problem is to use fin_list.append(new_list[:]), which will copy new_list and append the copy to fin_list.
def all_subsequences(ind, a, new_list, fin_list):
    if (ind >= len(a)):
        print(new_list)
        fin_list.append(new_list[:])
        return

    new_list.append(a[ind])
    all_subsequences(ind+1, a, new_list, fin_list)
    new_list.pop()
    all_subsequences(ind+1, a, new_list, fin_list)

    return fin_list

a = [3, 1, 2]
new_list = []
final_list = []
result = all_subsequences(0, a, new_list, final_list)
print(result)
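With the copy in place, the final print now shows every subsequence instead of eight empty lists:

[[3, 1, 2], [3, 1], [3, 2], [3], [1, 2], [1], [2], []]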
Tags: append, empty_list, list, python_3.x

Q:
Problem with virtualenv in Mac OS X
I've installed virtualenv via pip and get this error after creating a new environment:
selenium:~ auser$ virtualenv new
New python executable in new/bin/python
ERROR: The executable new/bin/python is not functioning
ERROR: It thinks sys.prefix is u'/System/Library/Frameworks/Python.framework/Versions/2.6' (should be '/Users/user/new')
ERROR: virtualenv is not compatible with this system or executable
In my environment:
PYTHONPATH=/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages
PATH=/System/Library/Frameworks/Python.framework/Versions/2.6/bin:/Library/Frameworks/Python.framework/Versions/2.6/bin:/Library/Frameworks/Python.framework/Versions/2.6/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin
How can I repair this?
Thanks.
A:
Just in case there's someone still searching for the answer:
I ran into this same problem just today and realized that, since I already have Anaconda installed, I should not have used pip install virtualenv to install virtualenv, as this gives the error message when trying to initiate it later. Instead, I ran conda install virtualenv, then entered virtualenv env_mysite, and the problem was solved.
A:
Like @RyanWilcox mentioned, you might be inadvertently pointing virtualenv to the wrong Python installation. Virtualenv comes with a -p flag to let you specify which interpreter to use.
In my case,
virtualenv test_env
threw the same error as yours, while
virtualenv -p python test_env
worked perfectly.
If you call virtualenv -h, the documentation for the -p flag will tell you which python it thinks it should be using; if it looks wonky, try passing -p python. For reference, I'm on virtualenv 1.11.6.
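A quick way to check which interpreter is actually being picked up is from Python itself (standard library only):

import sys

print(sys.executable)  # path of the interpreter that is running
print(sys.prefix)      # where it thinks its installation lives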
A:
In case anyone in the future runs into this problem: this is caused by your default Python distribution being conda. Conda has its own virtual env setup process, but if you have the conda distribution of Python and still wish to use virtualenv, here's how:
Find the other Python distribution on your machine: ls -ls /usr/bin/python*
Take note of the available Python version that is not conda and run the code below (note: for Python 3 and above you have to upgrade virtualenv first): virtualenv -p python2.7 (or your Python version) flaskapp
A:
I've run across this problem myself. I wrote down the instructions in a README, which I have pasted below.
I have found there are two things that work:
Make sure you're running the latest virtualenv (1.5.1, as of this writing).
If you're using a non-system Python as your standard Python (run which python to check), forcefully use the system-supplied one: instead of virtualenv thing, use /usr/bin/python2.6 PATH/TO/VIRTUALENV thing (or whatever which python returned to you - this is what it did for me when I ran into this issue).
A:
I had the same problem, and as I see it now, it was caused by a messy Python installation. I have had OS X installed for over a year since I bought a new laptop, and I had already installed and reinstalled Python several times using different sources (official binaries, homebrew, official binaries + hand-made adjustments as described here). Don't ask me why I did that; I'm just a miserable newbie believing everything will fix itself after being re-installed.
So, I had a number of different Pythons installed here and there, as well as many hardlinks pointing at them inconsistently. Eventually I got sick of all of them, reinstalled OS X, and carefully cleaned the system of all the Pythons I found using the find utility. Also, I unlinked all the links pointing to whatever Python from everywhere. Then I installed a fresh Python using homebrew, installed virtualenv, and everything works like a charm now.
So, my recipe is:
sudo find / -iname "python*" > python.log
Then analyze this file, remove and unlink everything related to the version of Python you need, reinstall it (I did it with homebrew; maybe an official installation will also work) and enjoy. Make sure you unlink everything python-related from /usr/bin and /usr/local/bin, as well as remove all the instances of Frameworks/Python.framework/Versions/<Your.Version> in /Library and /System/Library.
It may be a dirty hack, but it worked for me. I prefer not to keep any system-wide Python libraries except pip and virtualenv and create virtual environments for all of my projects, so I do not care about removing the important libraries. If you don't want to remove everything, still try to understand where your Pythons are, what links point to them and from where. Then think about what may cause the problem and fix it.
A:
I ran into a variation of this "not functioning" error.
I was trying to create an environment in a folder that included the path ".../Programming/Developing..." which is actually "/Users/eric/Documents/Programming:Developing/"
and got this error:
ImportError: No module named site
ERROR: The executable env/bin/python2.7 is not functioning
ERROR: It thinks sys.prefix is u'/Users/eric/Documents/Programming:Developing/heroku' (should be u'/Users/eric/Documents/Programming:Developing/heroku/env')
ERROR: virtualenv is not compatible with this system or executable
I tried the same in a different folder and it worked fine, no errors and env/bin has what I expect (activate, etc.).
A:
I got the same problem, and I found that it happens when you do not specify the Python executable name properly. So for Python 2.x, for example:
virtualenv --system-site-packages -p python mysite
But for Python 3.6 you need to specify the executable name, like python3.6:
virtualenv --system-site-packages -p python3.6 mysite
A:
On OSX 10.6.8 (Snow Leopard), after having "upgraded" to Lion, then downgrading again (ouch - AVOID!), I went through the Wolf Paulus method a few months ago, completely ignorant of Python. I deleted Python 2.7 altogether and "replaced" it with 3.something. My FTP program stopped working (Fetch), and who knows what else relies on Python 2.7. So at that point I downloaded the latest version of 2.7 from python.org, and its installer got me up and running - until I tried to use virtualenv.
What seems to have worked for me this time was totally deleting Python 2.7 with this code:
sudo rm -R /System/Library/Frameworks/Python.framework/Versions/2.7
removing all the links with this code:
sudo rm /usr/bin/pydoc
sudo rm /usr/bin/python
sudo rm /usr/bin/pythonw
sudo rm /usr/bin/python-config
I had tried to install python with homebrew, but apparently it will not work unless all of XTools is installed, which I have been avoiding, since the version of XTools compatible with 10.6 is ancient and 4GB and mostly all I need is GCC, the compiler, which you can get here.
So I just installed with the latest download from python.org.
Then had to reinstall easy_install, pip, virtualenv.
Definitely wondering when it will be time for a new laptop, but there's a lot to be said for buying fewer pieces of hardware (slave labor, unethical mining, etc).
A:
The above solutions failed for me, but the following worked:
python3 -m venv --without-pip <ENVIRONMENT_NAME>
. <ENVIRONMENT_NAME>/bin/activate
curl https://bootstrap.pypa.io/get-pip.py | python
deactivate
It's hacky, but yes, the core problem really did just seem to be pip.
A:
I did the following steps to get virtualenv working:
Update virtualenv as follows:
==> sudo pip install --upgrade virtualenv
Initialize a python3 virtualenv:
==> virtualenv -p python3 venv
A:
I had this same issue, and I can confirm that the problem was with an outdated virtualenv.py file.
It was not necessary to do a whole install --upgrade.
Replacing the virtualenv.py file with the most recent version sufficed.
A:
I also had this problem, and I tried the following method which worked for me:
conda install virtualenv
virtualenv --system-site-packages /anaconda/envs/tensorflow (here envs keeps all the virtual environments made by user)
source /anaconda/envs/tensorflow/bin/activate
Hope it's helpful.
A:
I had this same issue when trying to install py2.7 on a newer system. The root issue was that virtualenv was part of py3.7 and thus was not compatible:
$ virtualenv -p python2.7 env
Running virtualenv with interpreter /usr/local/bin/python2.7
New python executable in /Users/blah/env/bin/python
ERROR: The executable /Users/blah/env/bin/python is not functioning
ERROR: It thinks sys.prefix is u'/Library/Frameworks/Python.framework/Versions/2.7' (should be u'/Users/blah/env')
ERROR: virtualenv is not compatible with this system or executable
$ which virtualenv
/Library/Frameworks/Python.framework/Versions/3.7/bin/virtualenv
# install proper version of virtualenv
$ pip2.7 install virtualenv
$ /Library/Frameworks/Python.framework/Versions/2.7/bin/virtualenv -p python2.7 env
$ . ./env/bin/activate
(env) $
A:
If you continue to have trouble with virtualenv, you might try pythonbrew, instead. It's an alternate solution to the same problem. It works more like Ruby's rvm: It builds and creates an entire instance of Python, under $HOME/.pythonbrew, and then sets up some bash functions that allow you to switch easily between versions. Where virtualenv shadows the system version of Python, using symbolic links as part of its solution, pythonbrew builds entirely self-contained installations of Python.
I used virtualenv for years. It's a decent solution, but I've switched to pythonbrew lately. Having completely self-contained Python instances means that installing a new one takes a while (since pythonbrew actually compiles Python from scratch), but the self-contained nature of each installation appeals to me. And disk is cheap.
|
Problem with virtualenv in Mac OS X
|
I've installed virtualenv via pip and get this error after creating a new environment:
selenium:~ auser$ virtualenv new
New python executable in new/bin/python
ERROR: The executable new/bin/python is not functioning
ERROR: It thinks sys.prefix is u'/System/Library/Frameworks/Python.framework/ Versions/2.6' (should be '/Users/user/new')
ERROR: virtualenv is not compatible with this system or executable
In my environment:
PYTHONPATH=/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages
PATH=/System/Library/Frameworks/Python.framework/Versions/2.6/bin:/Library/Frameworks/Python.framework/Versions/2.6/bin:/Library/Frameworks/Python.framework/Versions/2.6/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin
How can I repair this?
Thanks.
|
[
"Just in case there's someone still seeking for the answer.\nI ran into this same problem just today and realized since I already have Anaconda installed, I should not have used pip install virtualenv to install virtual environment as this would give me the error message when trying to initiate it later. Instead, I tried conda install virtualenv then entered virtualenv env_mysite and problem solved.\n",
"Like @RyanWilcox mentioned, you might be inadvertently pointing virtualenv to the wrong Python installation. Virtualenv comes with a -p flag to let you specify which interpreter to use.\nIn my case,\nvirtualenv test_env\n\nthrew the same error as yours, while\nvirtualenv -p python test_env\n\nworked perfectly.\nIf you call virtualenv -h, the documentation for the -p flag will tell you which python it thinks it should be using; if it looks wonky, try passing -p python. For reference, I'm on virtualenv 1.11.6.\n",
"In case anyone in the future runs into this problem - this is caused by your default Python distribution being conda. Conda has it's own virtual env set up process but if you have the conda distribution of python and still wish to use virtualenv here's how:\n\nFind the other python distribution on your machine: ls -ls /usr/bin/python*\nTake note of the availble python version that is not conda and run the code below (note for python 3 and above you have to upgrade virtualenv first): virtualenv -p python2.7(or your python version) flaskapp\n\n",
"I've run across this problem myself. I wrote down the instructions in a README, which I have pasted below....\nI have found there are two things that work:\n\nMake sure you're running the latest virtualenv (1.5.1, of this writting)\nIf you're using a non system Python as your standard Python (which python to check) Forcefully use the System supplied one.\nInstead of virtualenv thing use /usr/bin/python2.6 PATH/TO/VIRTUALENV thing (or whatever which \npython returned to you - this is what it did for me when I ran into this issue)\n\n",
"I had the same problem and as I see it now, it was caused by a messy Python installation. I have OS X installed for over a year since I bought a new laptop and I have already installed and reinstalled Python for several times using different sources (official binaries, homebrew, official binaries + hand-made adjustments as described here). Don't ask me why I did that, I'm just a miserable newbie believing everything will fix itself after being re-installed. \nSo, I had a number of different Pythons installed here and there as well as many hardlinks pointing at them inconsistently. Eventually I got sick of all of them and reinstalled OS X carefully cleaned the system from all the Pythons I found using find utility. Also, I have unlinked all the links pointing to whatever Python from everywhere. Then I've installed a fresh Python using homebrew, installed virtualenv and everything works as a charm now.\nSo, my recipe is:\nsudo find / -iname \"python*\" > python.log\nThen analyze this file, remove and unlink everything related to the version of Python you need, reinstall it (I did it with homebrew, maybe official installation will also work) and enjoy. Make sure you unlink everything python-related from /usr/bin and /usr/local/bin as well as remove all the instances of Frameworks/Python.framework/Versions/<Your.Version> in /Library and /System/Library. \nIt may be a dirty hack, but it worked for me. I prefer not to keep any system-wide Python libraries except pip and virtualenv and create virtual environments for all of my projects, so I do not care about removing the important libraries. If you don't want to remove everything, still try to understand whether your Pythons are, what links point to them and from where. Then think what may cause the problem and fix it.\n",
"I ran into a variation of this \"not functioning\" error.\nI was trying to create an environment in a folder that included the path \".../Programming/Developing...\" which is actually \"/Users/eric/Documents/Programming:Developing/\"\nand got this error:\nImportError: No module named site\nERROR: The executable env/bin/python2.7 is not functioning\nERROR: It thinks sys.prefix is u'/Users/eric/Documents/Programming:Developing/heroku' (should be u'/Users/eric/Documents/Programming:Developing/heroku/env')\nERROR: virtualenv is not compatible with this system or executable\n\nI tried the same in a different folder and it worked fine, no errors and env/bin has what I expect (activate, etc.).\n",
"I got the same problem and I found that it happens when you do not specify the python executable name properly. So for python 2x, for example:\nvirtualenv --system-site-packages -p python mysite\nBut for python 3.6 you need to specify the executable name like python3.6\nvirtualenv --system-site-packages -p python3.6 mysite \n",
"On on OSX 10.6.8 leopard, after having \"upgraded\" to Lion, then downgrading again (ouch - AVOID!), I went through the Wolf Paulus method a few months ago, completely ignorant of python. Deleted python 2.7 altogether and \"replaced\" it with 3.something. My FTP program stopped working (Fetch) and who knows what else relies on Python 2.7. So at that point I downloaded the latest version of 2.7 from python.org and it's installer got me up and running - until i tried to use virtualenv.\nWhat seems to have worked for me this time was totally deleting Python 2.7 with this code:\nsudo rm -R /System/Library/Frameworks/Python.framework/Versions/2.7\n\nremoving all the links with this code: \nsudo rm /usr/bin/pydoc\nsudo rm /usr/bin/python\nsudo rm /usr/bin/pythonw\nsudo rm /usr/bin/python-config\n\nI had tried to install python with homebrew, but apparently it will not work unless all of XTools is installed, which I have been avoiding, since the version of XTools compatible with 10.6 is ancient and 4GB and mostly all I need is GCC, the compiler, which you can get here.\nSo I just installed with the latest download from python.org.\nThen had to reinstall easy_install, pip, virtualenv.\nDefinitely wondering when it will be time for a new laptop, but there's a lot to be said for buying fewer pieces of hardware (slave labor, unethical mining, etc).\n",
"The above solutions failed for me, but the following worked:\npython3 -m venv --without-pip <ENVIRONMENT_NAME>\n. <ENVIRONMENT_NAME>/bin/activate\ncurl https://bootstrap.pypa.io/get-pip.py | python\ndeactivate\n\nIt's hacky, but yes, the core problem really did just seem to be pip.\n",
"I did the following steps to get virtualenv working : \nUpdate virtualenv as follows : \n==> sudo pip install --upgrade virtualenv\n\nInitialize python3 virtualenv :\n==> virtualenv -p python3 venv\n\n",
"I had this same issue, and I can confirm that the problem was with an outdated virtualenv.py file. \nIt was not necessary to do a whole install --upgrade. \nReplacing the virtualenv.py file with the most recent version sufficed. \n",
"I also had this problem, and I tried the following method which worked for me:\nconda install virtualenv\n\nvirtualenv --system-site-packages /anaconda/envs/tensorflow (here envs keeps all the virtual environments made by user)\nsource /anaconda/envs/tensorflow/bin/activate\n\nHope it's helpful.\n",
"I had this same issue when trying to install py2.7 on a newer system. The root issue was that virtualenv was part of py3.7 and thus was not compatible:\n$ virtualenv -p python2.7 env\nRunning virtualenv with interpreter /usr/local/bin/python2.7\nNew python executable in /Users/blah/env/bin/python\nERROR: The executable /Users/blah/env/bin/python is not functioning\nERROR: It thinks sys.prefix is u'/Library/Frameworks/Python.framework/Versions/2.7' (should be u'/Users/blah/env')\nERROR: virtualenv is not compatible with this system or executable\n\n$ which virtualenv\n/Library/Frameworks/Python.framework/Versions/3.7/bin/virtualenv\n\n# install proper version of virtualenv \n$ pip2.7 install virtualenv\n\n$ /Library/Frameworks/Python.framework/Versions/2.7/bin/virtualenv -p python2.7 env\n\n$ . ./env/bin/activate\n(env) $ \n\n",
"If you continue to have trouble with virtualenv, you might try pythonbrew, instead. It's an alternate solution to the same problem. It works more like Ruby's rvm: It builds and creates an entire instance of Python, under $HOME/.pythonbrew, and then sets up some bash functions that allow you to switch easily between versions. Where virtualenv shadows the system version of Python, using symbolic links as part of its solution, pythonbrew builds entirely self-contained installations of Python.\nI used virtualenv for years. It's a decent solution, but I've switched to pythonbrew lately. Having completely self-contained Python instances means that installing a new one takes awhile (since pythonbrew actually compiles Python from scratch), but the self-contained nature of each installation appeals to me. And disk is cheap.\n"
] |
[
109,
6,
5,
4,
3,
1,
1,
0,
0,
0,
0,
0,
0,
-3
] |
[
"Open terminal and type /Library/Frameworks/Python.framework/Versions/\nthen type ls /Library/Frameworks/Python.framework/Versions/2.7/bin/\n if you are using Python2(or any other else).\nEdit ~/.bash_profile and add the following line:\nexport PATH=$PATH:/Library/Frameworks/Python.framework/Versions/2.7/bin/\ncat ~/.bash_profile\n\nIn my case the content of ~/.bash_profile is as follows:\nexport PATH=$PATH:/Library/Frameworks/Python.framework/Versions/2.7/bin/\nNow the virtualenv command should work.\n"
] |
[
-1
] |
[
"macos",
"operating_system",
"python",
"virtualenv"
] |
stackoverflow_0005904319_macos_operating_system_python_virtualenv.txt
|
Q:
pybind11: How to organize pybind module under a namespace package
In the below example from the pybind tutorial, a dynamic library is built.
setup.py in https://github.com/pybind/python_example:
ext_modules = [
Pybind11Extension("python_example",
["src/main.cpp"],
...
),
]
setup(
ext_modules=ext_modules,
...
)
It can be imported like this:
import python_example
But this lives in the global namespace and I would like to organize it under a namespace package like this:
import mypackage.python_example
It seems that regardless of where I put the main.cpp, it will always be accessible under the global namespace. I am thinking of e.g. numpy, where everything is used as np.somefunction and I never import from another namespace.
A:
One can add a namespace in front of the module name.
Pybind11Extension("mypackage.python_example",
["src/main.cpp"],
...
)
But the name in PYBIND11_MODULE should stay as it is.
PYBIND11_MODULE(python_example, m) {
This will add a folder during the build: mypackage/python_example.cpython-38-x86_64-linux-gnu.so
This way you can import it like this:
import mypackage.python_example
Thanks to Marc Gliss for his answer in the comments.
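For completeness, here is a minimal setup.py sketch putting both pieces together. It assumes the Pybind11Extension helper shipped in pybind11.setup_helpers (the import used by the example repo); treat it as a sketch, not the repo's exact file.
from setuptools import setup
from pybind11.setup_helpers import Pybind11Extension

ext_modules = [
    # The dotted name makes the build place the extension under mypackage/.
    Pybind11Extension("mypackage.python_example", ["src/main.cpp"]),
]

setup(name="python_example", ext_modules=ext_modules)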
|
pybind11: How to organize pybind module under a namespace package
|
In the below example from the pybind tutorial, a dynamic library is built.
setup.py in https://github.com/pybind/python_example:
ext_modules = [
Pybind11Extension("python_example",
["src/main.cpp"],
...
),
]
setup(
ext_modules=ext_modules,
...
)
It can be imported like this:
import python_example
But this lives in the global namespace and I would like to organize it under a namespace package like this:
import mypackage.python_example
It seems that regardless of where I put the main.cpp, it will always be accessible under the global namespace. I am thinking of e.g. numpy, where everything is used as np.somefunction and I never import from another namespace.
|
[
"One can add a namespace in front of the module name.\nPybind11Extension(\"mypackage.python_example\",\n [\"src/main.cpp\"],\n ...\n)\n\nBut the name in PYBIND11_MODULE should stay as it is.\nPYBIND11_MODULE(python_example, m) {\n\nThis will add a folder during the build: mypackage/python_example.cpython-38-x86_64-linux-gnu.so\nThis way you can import it like this:\nimport mypackage.python_example\n\nThanks to Marc Gliss for his answer in the comments.\n"
] |
[
0
] |
[] |
[] |
[
"c++",
"packaging",
"pybind11",
"python"
] |
stackoverflow_0074660906_c++_packaging_pybind11_python.txt
|
Q:
how list just one file from a (bash) shell directory listing
A bit lowly a query but here goes:
bash shell script. POSIX, Mint 21
I just want one/any (mp3) file from a directory. As a sample.
In normal execution, a full run, the code would be such
for f in *.mp3; do
#statements
done
This works fine but if I wanted to sample just one file of such an array/glob (?) without looping, how might I do that? I don't care which file, just that it is an mp3 from the directory I am working in.
Should I just start this for-loop and then exit(break) after one statement, or is there a neater way more tailored-for-the-job way?
for f in *.mp3; do
#statement
break
done
Ta (can not believe how dopey I feel asking this one, my forehead will hurt when I see the answers )
A:
I would do it like this in POSIX shell:
mp3file=
for f in *.mp3; do
if [ -f "$f" ]; then
mp3file=$f
break
fi
done
# At this point, the variable mp3file contains a filename which
# represents a regular file (or a symbolic link) with the .mp3
# extension, or empty string if there is no such a file.
A:
Since you are using Linux (Mint) you've got GNU find so one way to get one .mp3 file from the current directory is:
mp3file=$(find . -maxdepth 1 -mindepth 1 -name '*.mp3' -printf '%f' -quit)
-maxdepth 1 -mindepth 1 causes the search to be restricted to one level under the current directory.
-printf '%f' prints just the filename (e.g. foo.mp3). The -print option would print the path to the filename (e.g. ./foo.mp3). That may not matter to you.
-quit causes find to exit as soon as one match is found and printed.
Another option is to use the Bash : (colon) command and $_ (dollar underscore) special variable:
: *.mp3
mp3file=$_
: *.mp3 runs the : command with the list of .mp3 files in the current directory as arguments. The : command ignores its arguments and does nothing.
mp3file=$_ sets the value of the mp3file variable to the last argument supplied to the previous command (:).
The second option should not be used if the number of .mp3 files is large (hundreds or more) because it will find all of the files and sort them by name internally.
In both cases $mp3file should be checked to ensure that it really exists (e.g. [[ -e $mp3file ]]) before using it for anything else, in case there are no .mp3 files in the directory.
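As a usage sketch tying the two steps together (the echo merely stands in for your real #statements):
mp3file=$(find . -maxdepth 1 -mindepth 1 -name '*.mp3' -printf '%f' -quit)
if [ -n "$mp3file" ]; then
    echo "sampling: $mp3file"   # run your processing on this one file
fi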
|
how list just one file from a (bash) shell directory listing
|
A bit lowly a query but here goes:
bash shell script. POSIX, Mint 21
I just want one/any (mp3) file from a directory. As a sample.
In normal execution, a full run, the code would be such
for f in *.mp3; do
#statements
done
This works fine but if I wanted to sample just one file of such an array/glob (?) without looping, how might I do that? I don't care which file, just that it is an mp3 from the directory I am working in.
Should I just start this for-loop and then exit(break) after one statement, or is there a neater way more tailored-for-the-job way?
for f in *.mp3; do
#statement
break
done
Ta (can not believe how dopey I feel asking this one, my forehead will hurt when I see the answers )
|
[
"I would do it like this in POSIX shell:\nmp3file=\nfor f in *.mp3; do\n if [ -f \"$f\" ]; then\n mp3file=$f\n break\n fi\ndone\n# At this point, the variable mp3file contains a filename which\n# represents a regular file (or a symbolic link) with the .mp3\n# extension, or empty string if there is no such a file.\n\n",
"Since you are using Linux (Mint) you've got GNU find so one way to get one .mp3 file from the current directory is:\nmp3file=$(find . -maxdepth 1 -mindepth 1 -name '*.mp3' -printf '%f' -quit)\n\n\n-maxdepth 1 -mindepth 1 causes the search to be restricted to one level under the current directory.\n-printf '%f' prints just the filename (e.g. foo.mp3). The -print option would print the path to the filename (e.g. ./foo.mp3). That may not matter to you.\n-quit causes find to exit as soon as one match is found and printed.\n\nAnother option is to use the Bash : (colon) command and $_ (dollar underscore) special variable:\n: *.mp3\nmp3file=$_\n\n\n: *.mp3 runs the : command with the list of .mp3 files in the current directory as arguments. The : command ignores its arguments and does nothing.\nmp3file=$_ sets the value of the mp3file variable to the last argument supplied to the previous command (:).\n\nThe second option should not be used if the number of .mp3 files is large (hundreds or more) because it will find all of the files and sort them by name internally.\nIn both cases $mp3file should be checked to ensure that it really exists (e.g. [[ -e $mp3file ]]) before using it for anything else, in case there are no .mp3 files in the directory.\n"
] |
[
1,
1
] |
[
"The fact that you use\nfor f in *.mp3 do\n\nsuggests to me, that the MP3s are named without to much strange characters in the filename.\nIn that case, if you really don't care which MP3, you could:\nf=$(ls *.mp3|head)\nstatement\n\nOr, if you want a different one every time:\nf=$(ls *.mp3|sort -R | tail -1)\n\nNote: if your filenames get more complicated (including spaces or other special characters), this will not work anymore.\n",
"Assuming you don't have spaces in your filenames, (and I don't understand why the collective taboo is against using ls in scripts at all, rather than not having spaces in filenames, personally) then:-\nls *.mp3 | tr ' ' '\\n' | sed -n '1p'\n\n"
] |
[
-1,
-1
] |
[
"bash",
"file_handling",
"shell"
] |
stackoverflow_0074652275_bash_file_handling_shell.txt
|
Q:
Building project
I am studying C# and have a task to implement two algorithms, each in two ways (to compare the implementations), and to present the job in a nice form.
I would like to know which form will be better.
1 solution - 2 projects - 4 apps (for each way of realisation).
1 solution - 2 projects - 2 apps (one app for every task).
1 solution - 1 proj - 1 app (for all task).
Which topics should I study to understand the mistakes in the options above?
I would be grateful if you explained the best way of structuring this task, because it would help me understand how to approach programming tasks much better.
I've had problems setting namespaces when I tried creating an app for each way of implementing.
A:
I think you're asking about how to structure your project, which might prompt a lot of varied opinions. Given your description, I would probably have 1 solution with 1 project. The different functions you need can all live inside your Program.cs or in one or more separate class files. It sounds like you're building a GUI app, so I would add buttons that invoke the different functions - one button for this function and another for that function.
Programming is part science, part art. There are many ways you can accomplish what you want to do; the "art" part of it is your style. Focus on making it work and then you can refactor to make it elegant.
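As an illustration of that button-per-function idea, here is a minimal sketch. WinForms is assumed, and RunFirstWay/RunSecondWay are hypothetical placeholders for your two implementations:
using System.Windows.Forms;

public class MainForm : Form
{
    public MainForm()
    {
        var first = new Button { Text = "Algorithm, way A", Top = 10, Width = 200 };
        first.Click += (s, e) => MessageBox.Show(RunFirstWay());
        var second = new Button { Text = "Algorithm, way B", Top = 40, Width = 200 };
        second.Click += (s, e) => MessageBox.Show(RunSecondWay());
        Controls.AddRange(new Control[] { first, second });
    }

    static string RunFirstWay() => "result A";   // placeholder for implementation A
    static string RunSecondWay() => "result B";  // placeholder for implementation B
}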
|
Building project
|
I am studying C# and have a task to implement two algorithms, each in two ways (to compare the implementations), and to present the job in a nice form.
I would like to know which form will be better.
1 solution - 2 projects - 4 apps (for each way of realisation).
1 solution - 2 projects - 2 apps (one app for every task).
1 solution - 1 proj - 1 app (for all task).
Which topics should I study to understand the mistakes in the options above?
I would be grateful if you explained the best way of structuring this task, because it would help me understand how to approach programming tasks much better.
I've had problems setting namespaces when I tried creating an app for each way of implementing.
|
[
"I think you're asking about how to structure your project, which might prompt a lot of varied opinions. Given your description, I would probably have 1 solution with 1 project. The different functions you need can all live inside your Program.cs or in one or more separate class files. It sounds like you're building a GUI app, so I would add buttons that invoke the different functions - one button for this function and another for that function.\nProgramming is part science, part art. There are many ways you can accomplish what you want to do; the \"art\" part of it is your style. Focus on making it work and then you can refactor to make it elegant.\n"
] |
[
1
] |
[] |
[] |
[
"building",
"c#",
"structure"
] |
stackoverflow_0074655612_building_c#_structure.txt
|
Q:
How can I plot a Hamiltonian graph in R?
I have the following undirected graph (picture) that contains a cycle or a Hamiltonian path of length |V|= 8. The cycle (path) with no repeated edges and vertices is the red line. The adjacency matrix is :
    A  B  C  D  E  F  G  H
A   0  1  0  1  1  0  0  0
B   1  0  1  0  0  1  0  0
C   0  1  0  1  0  0  0  1
D   1  0  1  0  0  0  1  0
E   1  0  0  0  0  1  1  0
F   0  1  0  0  1  0  0  1
G   0  0  0  1  1  0  0  1
H   0  0  1  0  0  1  1  0
How can I plot this graph in R?
Ham = matrix(c(0,1,0,1,1,0,0,0,
1,0,1,0,0,1,0,0,
0,1,0,1,0,0,0,1,
1,0,1,0,0,0,1,0,
1,0,0,0,0,1,1,0,
0,1,0,0,1,0,0,1,
0,0,0,1,1,0,0,1,
0,0,1,0,0,1,1,0),8,8)
Ham
A:
Update
If you need only one of all the Hamilton circles, you can try graph.subisomorphic.lad (thanks for the advice from @Szabolcs), which speeds up a lot if you don't need to list out all the possibilities, e.g.,
g <- graph_from_adjacency_matrix(Ham, "undirected")
es <- graph.subisomorphic.lad(make_ring(vcount(g)), g)$map
g %>%
set_edge_attr("color", value = "black") %>%
set_edge_attr("color",
get.edge.ids(g, c(rbind(es, c(es[-1], es[1])))),
value = "red"
) %>%
plot()
If you want to find all Hamilton circles:
You should be aware of the fact that the Hamilton circle is isomorphic to a ring consisting of all vertices, so we can resort to subgraph_isomorphisms to find out all those kinds of "rings", e.g.,
g <- graph_from_adjacency_matrix(Ham, "undirected")
lst <- lapply(
subgraph_isomorphisms(make_ring(vcount(g)), g),
function(es) {
g %>%
set_edge_attr("color", value = "black") %>%
set_edge_attr("color",
get.edge.ids(g, c(rbind(es, c(es[-1], es[1])))),
value = "red"
)
}
)
where lst is a list of graphs, and you can see
plot(lst[[1]]) gives
plot(lst[[2]]) gives
and so on and so forth.
A:
Given the red line and the picture, calculate the graph
with edges in order of occurrence.
## Make edge lists to prevent igraph from rearranging the edges.
elh <- as_edgelist(make_graph( ~ A-B-F-E-G-H-C-D) )
elr <- as_edgelist(make_graph( ~ D-A, A-E, B-C, F-H, G-D) )
g1 <- graph_from_edgelist(rbind(elh, elr), directed=FALSE )
Set the color to the first eight edges as red and otherwise black.
E(g1)$label <- paste("a", seq(ecount(g1)), sep = "")
E(g1)[ (1:8)]$color <- "red"
E(g1)[-(1:8)]$color <- "black"
Map vertices to x,y coordinates.
layout_as_homer <- matrix( c( -4,4, 4,4, 4,-4, -4,-4 # A:D.
, -2,2, 2,2, -2,-2, 2,-2 # E:H.
)
, ncol=2, byrow=TRUE
)
Reorder vertices in alphabetical order and plot.
g2 <- permute(g1, match(V(g1)$name, LETTERS[1:8]))
plot(g2, layout=layout_as_homer, edge.width=3, edge.label.cex = 1.5)
g2[] # adjacency matrix
Output.
8 x 8 sparse Matrix of class "dgCMatrix"
A B C D E F G H
A . 1 . 1 1 . . .
B 1 . 1 . . 1 . .
C . 1 . 1 . . . 1
D 1 . 1 . . . 1 .
E 1 . . . . 1 1 .
F . 1 . . 1 . . 1
G . . . 1 1 . . 1
H . . 1 . . 1 1 .
Planar layout of hyper graph Q3.
A:
An alternative is not to rely on igraph's internal ordering.
q3 <- make_graph(~ A-B-C-D-A
, a-b-c-d-a
, A-a, B-b, C-c, D-d
)
q3$main = "Planar layout of hyper graph Q3"
E(q3)$label <- paste("a", seq(ecount(q3)), sep = "")
hp <- c( "A","B", "B","b", "b","a", "a","d"
, "d","c", "c","C", "C","D", "D","A"
)
E(q3)[ get.edge.ids(q3, hp)]$color <- "red"
E(q3)[-get.edge.ids(q3, hp)]$color <- "black"
layout_as_homer <- matrix( c( -4,4, 4,4, 4,-4, -4,-4 # A:D
, -2,2, 2,2, 2,-2, -2,-2 # a:d
)
, ncol=2, byrow=TRUE
)
plot(q3, layout=layout_as_homer, edge.width=3, edge.label.cex = 1.5)
q3[]
|
How can I plot a Hamiltonian graph in R?
|
I have the following undirected graph (picture) that contains a cycle or a Hamiltonian path of length |V|= 8. The cycle (path) with no repeated edges and vertices is the red line. The adjacency matrix is :
    A  B  C  D  E  F  G  H
A   0  1  0  1  1  0  0  0
B   1  0  1  0  0  1  0  0
C   0  1  0  1  0  0  0  1
D   1  0  1  0  0  0  1  0
E   1  0  0  0  0  1  1  0
F   0  1  0  0  1  0  0  1
G   0  0  0  1  1  0  0  1
H   0  0  1  0  0  1  1  0
How can I plot this graph in R?
Ham = matrix(c(0,1,0,1,1,0,0,0,
1,0,1,0,0,1,0,0,
0,1,0,1,0,0,0,1,
1,0,1,0,0,0,1,0,
1,0,0,0,0,1,1,0,
0,1,0,0,1,0,0,1,
0,0,0,1,1,0,0,1,
0,0,1,0,0,1,1,0),8,8)
Ham
|
[
"Update\nIf you need only one of all the Hamilton circles, you can try graph.subisomorphic.lad (thanks for the advice from @Szabolcs), which speeds up a lot if you don't need to list out all the possibilities, e.g.,\ng <- graph_from_adjacency_matrix(Ham, \"undirected\")\nes <- graph.subisomorphic.lad(make_ring(vcount(g)), g)$map\ng %>%\n set_edge_attr(\"color\", value = \"black\") %>%\n set_edge_attr(\"color\",\n get.edge.ids(g, c(rbind(es, c(es[-1], es[1])))),\n value = \"red\"\n ) %>%\n plot()\n\n\nIf you want to find all Hamilton circles:\nYou should be aware of the fact that the Hamilton circle is isomorphic to a ring consisting of all vertices, so we can resort to subgraph_isomorphisms to find out all those kinds of \"rings\", e.g.,\ng <- graph_from_adjacency_matrix(Ham, \"undirected\")\nlst <- lapply(\n subgraph_isomorphisms(make_ring(vcount(g)), g),\n function(es) {\n g %>%\n set_edge_attr(\"color\", value = \"black\") %>%\n set_edge_attr(\"color\",\n get.edge.ids(g, c(rbind(es, c(es[-1], es[1])))),\n value = \"red\"\n )\n }\n)\n\nwhere lst is a list of graphs, and you can see\n\nplot(lst[[1]] gives\n\nplot(lst[[2]] gives\n\n\nand so on so forth.\n",
"Given the red line and the picture, calculate the graph\nwith edges in order of occurrence.\n## Make edge lists to prevent igraph from rearranging the edges.\nelh <- as_edgelist(make_graph( ~ A-B-F-E-G-H-C-D) )\nelr <- as_edgelist(make_graph( ~ D-A, A-E, B-C, F-H, G-D) )\ng1 <- graph_from_edgelist(rbind(elh, elr), directed=FALSE )\n\nSet the color to the first eight edges as red and otherwise black.\nE(g1)$label <- paste(\"a\", seq(ecount(g1)), sep = \"\")\nE(g1)[ (1:8)]$color <- \"red\" \nE(g1)[-(1:8)]$color <- \"black\"\n\nMap vertices to x,y coordinates.\nlayout_as_homer <- matrix( c( -4,4, 4,4, 4,-4, -4,-4 # A:D.\n , -2,2, 2,2, -2,-2, 2,-2 # E:H.\n )\n , ncol=2, byrow=TRUE\n )\n\nReorder vertices in alphabetical order and plot.\ng2 <- permute(g1, match(V(g1)$name, LETTERS[1:8]))\nplot(g2, layout=layout_as_homer, edge.width=3, edge.label.cex = 1.5)\ng2[] # adjacency matrix\n\nOutput.\n8 x 8 sparse Matrix of class \"dgCMatrix\"\n A B C D E F G H\nA . 1 . 1 1 . . .\nB 1 . 1 . . 1 . .\nC . 1 . 1 . . . 1\nD 1 . 1 . . . 1 .\nE 1 . . . . 1 1 .\nF . 1 . . 1 . . 1\nG . . . 1 1 . . 1\nH . . 1 . . 1 1 .\n\nPlanar layout of hyper graph Q3.\n\n",
"An alternative is not to rely on igraphs internal ordering.\nq3 <- make_graph(~ A-B-C-D-A\n , a-b-c-d-a\n , A-a, B-b, C-c, D-d\n )\nq3$main = \"Planar layout of hyper graph Q3\"\nE(q3)$label <- paste(\"a\", seq(ecount(q3)), sep = \"\")\n\nhp <- c( \"A\",\"B\", \"B\",\"b\", \"b\",\"a\", \"a\",\"d\"\n , \"d\",\"c\", \"c\",\"C\", \"C\",\"D\", \"D\",\"A\"\n )\n\nE(q3)[ get.edge.ids(q3, hp)]$color <- \"red\"\nE(q3)[-get.edge.ids(q3, hp)]$color <- \"black\"\n\nlayout_as_homer <- matrix( c( -4,4, 4,4, 4,-4, -4,-4 # A:D\n , -2,2, 2,2, 2,-2, -2,-2 # a:d\n )\n , ncol=2, byrow=TRUE\n ) \nplot(q3, layout=layout_as_homer, edge.width=3, edge.label.cex = 1.5)\nq3[]\n\n"
] |
[
3,
1,
1
] |
[] |
[] |
[
"graph",
"graph_theory",
"hamiltonian_cycle",
"igraph",
"r"
] |
stackoverflow_0074646363_graph_graph_theory_hamiltonian_cycle_igraph_r.txt
|
Q:
Trying to create a sliding window that checks for repeats in a DNA sequence
I'm trying to write a bioinformatics program that will check for certain repeats in a given string of nucleotides. The user inputs a certain pattern, and the program outputs how many times it is repeated, or even highlights where the repeats are. I've gotten a good start on it, but could use some help.
Below is my code so far.
while True:
text = 'AGACGCCTGGGAACTGCGGCCGCGGGCTCGCGCTCCTCGCCAGGCCCTGCCGCCGGGCTGCCATCCTTGCCCTGCCATGTCTCGCCGGAAGCCTGCGTCGGGCGGCCTCGCTGCCTCCAGCTCAGCCCCTGCGAGGCAAGCGGTTTTGAGCCGATTCTTCCAGTCTACGGGAAGCCTGAAATCCACCTCCTCCTCCACAGGTGCAGCCGACCAGGTGGACCCTGGCGCTgcagcggctgcagcggccgcagcggccgcagcgCCCCCAGCGCCCCCAGCTCCCGCCTTCCCGCCCCAGCTGCCGCCGCACATA'
print ("Input Pattern:")
pattern = input("")
def pattern_count(text, pattern):
count = 0
for i in range(len(text) - len(pattern) + 1):
if text[i: i + len(pattern)] == pattern:
count = count + 1
return count
print(pattern_count(text, pattern))
The issue lies in the fact that I only get an output when the pattern comes from the beginning of the string (e.g. AGA or AGAC). Any help or recommendations would be greatly appreciated. Thank you so much!
A:
One possibility is to use re.findall:
import re
text = 'AGACGCCTGGGAACTGCGGCCGCGGGCTCGCGCTCCTCGCCAGGCCCTGCCGCCGGGCTGCCATCCTTGCCCTGCCATGTCTCGCCGGAAGCCTGCGTCGGGCGGCCTCGCTGCCTCCAGCTCAGCCCCTGCGAGGCAAGCGGTTTTGAGCCGATTCTTCCAGTCTACGGGAAGCCTGAAATCCACCTCCTCCTCCACAGGTGCAGCCGACCAGGTGGACCCTGGCGCTgcagcggctgcagcggccgcagcggccgcagcgCCCCCAGCGCCCCCAGCTCCCGCCTTCCCGCCCCAGCTGCCGCCGCACATA'
pattern = "CCT"
count = sum(1 for _ in re.findall(pattern, text))
The sum(1 for ...) is a common pattern to count the number of items, a generator returns. See e.g. this answer.
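One caveat worth adding (my note, not part of the answer above): re.findall, like the str.count used in the next answer, only counts non-overlapping matches, whereas the question's sliding window counts overlapping ones. A lookahead with re.finditer handles overlaps and also yields the positions the question wants to highlight:
import re

text = "AGAGAGA"
pattern = "AGA"

# findall finds 2 non-overlapping matches; the lookahead finds all 3 overlapping ones.
positions = [m.start() for m in re.finditer(f"(?={re.escape(pattern)})", text)]
print(len(positions), positions)  # 3 [0, 2, 4]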
A:
Here is a modified version of your code that will allow the user to input a string of nucleotides and a pattern to search for. It will then output the number of times the pattern appears in the string. Note that this code is case sensitive, so "AGC" and "agc" will be treated as different patterns.
def pattern_count(text, pattern):
count = 0
for i in range(len(text) - len(pattern) + 1):
if text[i: i + len(pattern)] == pattern:
count = count + 1
return count
while True:
print("Input the string of nucleotides:")
text = input()
print("Input the pattern to search for:")
pattern = input()
count = pattern_count(text, pattern)
print("The pattern appears {} times in the string.".format(count))
One potential optimization you could make to your code is to use the built-in count() method to count the number of times a pattern appears in a string. This would avoid the need to loop over the string and check each substring manually. Here is how you could modify your code to use this method:
def pattern_count(text, pattern):
return text.count(pattern)
while True:
print("Input the string of nucleotides:")
text = input()
print("Input the pattern to search for:")
pattern = input()
count = pattern_count(text, pattern)
print("The pattern appears {} times in the string.".format(count))
A:
Here's a fixed version of your code:
def pattern_count(text, pattern):
count = 0
for i in range(len(text) - len(pattern) + 1):
if text[i: i + len(pattern)] == pattern:
count += 1
return count
while True:
text = 'AGACGCCTGGGAACTGCGGCCGCGGGCTCGCGCTCCTCGCCAGGCCCTGCCGCCGGGCTGCCATCCTTGCCCTGCCATGTCTCGCCGGAAGCCTGCGTCGGGCGGCCTCGCTGCCTCCAGCTCAGCCCCTGCGAGGCAAGCGGTTTTGAGCCGATTCTTCCAGTCTACGGGAAGCCTGAAATCCACCTCCTCCTCCACAGGTGCAGCCGACCAGGTGGACCCTGGCGCTgcagcggctgcagcggccgcagcggccgcagcgCCCCCAGCGCCCCCAGCTCCCGCCTTCCCGCCCCAGCTGCCGCCGCACATA'
print("Input Pattern:")
pattern = input("")
print(pattern_count(text, pattern))
The issue with your code was that the return statement was indented one level too far, inside the for loop, which caused it to execute after the first iteration instead of after all iterations. I moved the return statement outside the for loop, so that it returns the count after all iterations have completed, and I used the += operator to increment the count, which is equivalent to count = count + 1 but more idiomatic.
|
Trying to create a sliding window that checks for repeats in a DNA sequence
|
I'm trying to write a bioinformatics program that will check for certain repeats in a given string of nucleotides. The user inputs a certain pattern, and the program outputs how many times it is repeated, or even highlights where the repeats are. I've gotten a good start on it, but could use some help.
Below is my code so far.
while True:
text = 'AGACGCCTGGGAACTGCGGCCGCGGGCTCGCGCTCCTCGCCAGGCCCTGCCGCCGGGCTGCCATCCTTGCCCTGCCATGTCTCGCCGGAAGCCTGCGTCGGGCGGCCTCGCTGCCTCCAGCTCAGCCCCTGCGAGGCAAGCGGTTTTGAGCCGATTCTTCCAGTCTACGGGAAGCCTGAAATCCACCTCCTCCTCCACAGGTGCAGCCGACCAGGTGGACCCTGGCGCTgcagcggctgcagcggccgcagcggccgcagcgCCCCCAGCGCCCCCAGCTCCCGCCTTCCCGCCCCAGCTGCCGCCGCACATA'
print ("Input Pattern:")
pattern = input("")
def pattern_count(text, pattern):
count = 0
for i in range(len(text) - len(pattern) + 1):
if text[i: i + len(pattern)] == pattern:
count = count + 1
return count
print(pattern_count(text, pattern))
The issue lies in the fact that I only get an output when the pattern comes from the beginning of the string (e.g. AGA or AGAC). Any help or recommendations would be greatly appreciated. Thank you so much!
|
[
"One possibility is to use re.findall:\nimport re\ntext = 'AGACGCCTGGGAACTGCGGCCGCGGGCTCGCGCTCCTCGCCAGGCCCTGCCGCCGGGCTGCCATCCTTGCCCTGCCATGTCTCGCCGGAAGCCTGCGTCGGGCGGCCTCGCTGCCTCCAGCTCAGCCCCTGCGAGGCAAGCGGTTTTGAGCCGATTCTTCCAGTCTACGGGAAGCCTGAAATCCACCTCCTCCTCCACAGGTGCAGCCGACCAGGTGGACCCTGGCGCTgcagcggctgcagcggccgcagcggccgcagcgCCCCCAGCGCCCCCAGCTCCCGCCTTCCCGCCCCAGCTGCCGCCGCACATA'\npattern = \"CCT\"\ncount = sum(1 for _ in re.findall(pattern, text))\n\nThe sum(1 for ...) is a common pattern to count the number of items, a generator returns. See e.g. this answer.\n",
"Here is a modified version of your code that will allow the user to input a string of nucleotides and a pattern to search for. It will then output the number of times the pattern appears in the string. Note that this code is case sensitive, so \"AGC\" and \"agc\" will be treated as different patterns.\ndef pattern_count(text, pattern):\n count = 0\n for i in range(len(text) - len(pattern) + 1):\n if text[i: i + len(pattern)] == pattern:\n count = count + 1\n return count\n\nwhile True:\n print(\"Input the string of nucleotides:\")\n text = input()\n\n print(\"Input the pattern to search for:\")\n pattern = input()\n\n count = pattern_count(text, pattern)\n print(\"The pattern appears {} times in the string.\".format(count))\n\nOne potential optimization you could make to your code is to use the built-in count() method to count the number of times a pattern appears in a string. This would avoid the need to loop over the string and check each substring manually. Here is how you could modify your code to use this method:\ndef pattern_count(text, pattern):\n return text.count(pattern)\n\nwhile True:\n print(\"Input the string of nucleotides:\")\n text = input()\n\n print(\"Input the pattern to search for:\")\n pattern = input()\n\n count = pattern_count(text, pattern)\n print(\"The pattern appears {} times in the string.\".format(count))\n\n",
"Here's a fixed version of your code:\ndef pattern_count(text, pattern):\n count = 0\n for i in range(len(text) - len(pattern) + 1):\n if text[i: i + len(pattern)] == pattern:\n count += 1\n return count\n\n\nwhile True:\n text = 'AGACGCCTGGGAACTGCGGCCGCGGGCTCGCGCTCCTCGCCAGGCCCTGCCGCCGGGCTGCCATCCTTGCCCTGCCATGTCTCGCCGGAAGCCTGCGTCGGGCGGCCTCGCTGCCTCCAGCTCAGCCCCTGCGAGGCAAGCGGTTTTGAGCCGATTCTTCCAGTCTACGGGAAGCCTGAAATCCACCTCCTCCTCCACAGGTGCAGCCGACCAGGTGGACCCTGGCGCTgcagcggctgcagcggccgcagcggccgcagcgCCCCCAGCGCCCCCAGCTCCCGCCTTCCCGCCCCAGCTGCCGCCGCACATA'\n print(\"Input Pattern:\")\n pattern = input(\"\")\n\n print(pattern_count(text, pattern))\n\nThe issues with your code were that you had an extra indentation in the for loop, which caused the return statement to be executed after the first iteration of the loop, instead of after all iterations. I also added a += operator to increase the count, instead of overwriting the count with the result of count + 1. Finally, I moved the return statement outside the for loop, so that it returns the count after all iterations of the loop have been completed.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"bioinformatics",
"biopython",
"dna_sequence",
"python",
"repeat"
] |
stackoverflow_0074659092_bioinformatics_biopython_dna_sequence_python_repeat.txt
|
Q:
Command not found - installing ganache-cli with yarn on Visual Studio
I've installed nodeJS in the terminal of Visual Studio version :
v16.13.1
Yarn
1.22.17
Ganache-cli
MacBook:web3_py_simple_storage myName$ yarn global add ganache-cli
warning ../package.json: No license field
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Installed "[email protected]" with binaries:
- ganache-cli
✨ Done in 1.10s.```
BUT when I try to run `ganache-cli --version`, I always receive the same message `bash: ganache-cli: command not found`... I think it may be a PATH problem, but I have tried a lot of solutions and still nothing.
Thanks a lot in advance to whoever can help me!
A:
I encountered the same issue earlier today, got it solved by typing this command in my visual studio terminal
npm install -g ganache-cli
(Note: You must have Nodejs already installed)
After the installation simply run the command below
ganache-cli --version
to check if it was installed properly
A:
I used npm install -g ganache and it solved my problem, but when I want to start ganache-cli, I run it with npx ganache-cli.
A:
you might have to add it to your PATH:
C:\Users\userName\AppData\Local\Yarn\bin
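On macOS (the questioner's platform), the equivalent fix is to put yarn's global bin directory (yarn global bin prints its location) on the PATH, e.g. in ~/.bash_profile:
export PATH="$(yarn global bin):$PATH"
ganache-cli --version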
|
Command not found - installing ganache-cli with yarn on Visual Studio
|
I've installed nodeJS in the terminal of Visual Studio version :
v16.13.1
Yarn
1.22.17
Ganache-cli
MacBook:web3_py_simple_storage myName$ yarn global add ganache-cli
warning ../package.json: No license field
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Installed "[email protected]" with binaries:
- ganache-cli
✨ Done in 1.10s.```
BUT when I try to run `ganache-cli --version`, I always receive the same message `bash: ganache-cli: command not found`... I think it may be a PATH problem, but I have tried a lot of solutions and still nothing.
Thanks a lot in advance to whoever can help me!
|
[
"I encountered the same issue earlier today, got it solved by typing this command in my visual studio terminal\nnpm install -g ganache-cli\n(Note: You must have Nodejs already installed)\nAfter the installation simply run the command below\nganache-cli --version\nto check if it was installed properly\n",
"I used npm install -g ganache and it solved my problem, but when I want to start Ganache-cli, run with npx ganache-cli.\n",
"you might have to add it to path\nC:\\Users\\userName\\AppData\\Local\\Yarn\\bin\n\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"ganache",
"installation",
"python",
"terminal",
"visual_studio"
] |
stackoverflow_0070599723_ganache_installation_python_terminal_visual_studio.txt
|
Q:
How to import a scss file into a typescript file
I am trying to import a .scss file into a .tsx typescript file.
However, I get the following error when I run npm run tsc:
src/public/app1/index.tsx:5:20 - error TS2307: Cannot find module
'./index.scss'.
5 import styles from "./index.scss";
~~~~~~~~~~~~~~
Found 1 error.
index.tsx
import React from "react";
import { render } from "react-dom";
import App from "./App";
import styles from "./index.scss";
render(
<App/>,
document.getElementById("root"),
);
index.scss
body {
background:red;
color: #005CC5;
text-align: center;
}
I saw one solution was to create a decleration.d.ts file in the root of my project. But this has not worked for me. Can't import CSS/SCSS modules. TypeScript says "Cannot Find Module"
decleration.d.ts
declare module '*.scss';
tsconfig.json
{
"compilerOptions": {
/* Basic Options */
// "incremental": true, /* Enable incremental compilation */
"target": "es6", /* Specify ECMAScript target version: 'ES3' (default), 'ES5', 'ES2015', 'ES2016', 'ES2017', 'ES2018', 'ES2019' or 'ESNEXT'. */
"module": "commonjs", /* Specify module code generation: 'none', 'commonjs', 'amd', 'system', 'umd', 'es2015', or 'ESNext'. */
// "lib": [], /* Specify library files to be included in the compilation. */
// "allowJs": true, /* Allow javascript files to be compiled. */
// "checkJs": true, /* Report errors in .js files. */
"jsx": "react", /* Specify JSX code generation: 'preserve', 'react-native', or 'react'. */
// "declaration": true, /* Generates corresponding '.d.ts' file. */
// "declarationMap": true, /* Generates a sourcemap for each corresponding '.d.ts' file. */
// "sourceMap": true, /* Generates corresponding '.map' file. */
// "outFile": "./", /* Concatenate and emit output to single file. */
"outDir": "./build", /* Redirect output structure to the directory. */
"rootDir": "./src", /* Specify the root directory of input files. Use to control the output directory structure with --outDir. */
// "composite": true, /* Enable project compilation */
// "tsBuildInfoFile": "./", /* Specify file to store incremental compilation information */
// "removeComments": true, /* Do not emit comments to output. */
// "noEmit": true, /* Do not emit outputs. */
// "importHelpers": true, /* Import emit helpers from 'tslib'. */
// "downlevelIteration": true, /* Provide full support for iterables in 'for-of', spread, and destructuring when targeting 'ES5' or 'ES3'. */
// "isolatedModules": true, /* Transpile each file as a separate module (similar to 'ts.transpileModule'). */
/* Strict Type-Checking Options */
"strict": true, /* Enable all strict type-checking options. */
// "noImplicitAny": true, /* Raise error on expressions and declarations with an implied 'any' type. */
// "strictNullChecks": true, /* Enable strict null checks. */
// "strictFunctionTypes": true, /* Enable strict checking of function types. */
// "strictBindCallApply": true, /* Enable strict 'bind', 'call', and 'apply' methods on functions. */
// "strictPropertyInitialization": true, /* Enable strict checking of property initialization in classes. */
// "noImplicitThis": true, /* Raise error on 'this' expressions with an implied 'any' type. */
// "alwaysStrict": true, /* Parse in strict mode and emit "use strict" for each source file. */
/* Additional Checks */
// "noUnusedLocals": true, /* Report errors on unused locals. */
// "noUnusedParameters": true, /* Report errors on unused parameters. */
// "noImplicitReturns": true, /* Report error when not all code paths in function return a value. */
// "noFallthroughCasesInSwitch": true, /* Report errors for fallthrough cases in switch statement. */
/* Module Resolution Options */
"moduleResolution": "node", /* Specify module resolution strategy: 'node' (Node.js) or 'classic' (TypeScript pre-1.6). */
"baseUrl": "./src", /* Base directory to resolve non-absolute module names. */
// "paths": {}, /* A series of entries which re-map imports to lookup locations relative to the 'baseUrl'. */
// "rootDirs": [], /* List of root folders whose combined content represents the structure of the project at runtime. */
// "typeRoots": [], /* List of folders to include type definitions from. */
// "types": [], /* Type declaration files to be included in compilation. */
// "allowSyntheticDefaultImports": true, /* Allow default imports from modules with no default export. This does not affect code emit, just typechecking. */
"esModuleInterop": true /* Enables emit interoperability between CommonJS and ES Modules via creation of namespace objects for all imports. Implies 'allowSyntheticDefaultImports'. */
// "preserveSymlinks": true, /* Do not resolve the real path of symlinks. */
// "allowUmdGlobalAccess": true, /* Allow accessing UMD globals from modules. */
/* Source Map Options */
// "sourceRoot": "", /* Specify the location where debugger should locate TypeScript files instead of source locations. */
// "mapRoot": "", /* Specify the location where debugger should locate map files instead of generated locations. */
// "inlineSourceMap": true, /* Emit a single file with source maps instead of having a separate file. */
// "inlineSources": true, /* Emit the source alongside the sourcemaps within a single file; requires '--inlineSourceMap' or '--sourceMap' to be set. */
/* Experimental Options */
// "experimentalDecorators": true, /* Enables experimental support for ES7 decorators. */
// "emitDecoratorMetadata": true, /* Enables experimental support for emitting type metadata for decorators. */
},
"include": [
"src",
],
"exclude": [
"node_modules"
]
}
A:
As OP mentioned, the current solution is to add a file called global.d.ts to your src dir with the following contents
declare module '*.scss';
In my case, I'm using rollup-plugin-lit-css to import CSS as js objects. I had to create a /typings/declarations.d.ts file,
declare module '*.css' {
import { CSSResult } from 'lit-element';
const css: CSSResult;
export default css;
}
and reference it in tsconfig like so:
{
"include": [ "typings", "src" ],
"compilerOptions": {
"target": "ESNext",
"module": "CommonJS",
"esModuleInterop": true,
"noEmit": true,
"allowJs": true,
"checkJs": true
}
}
A:
Have you just tried import "./index.scss"; ?
A:
I had this same issue and created a declarations file in the root directory and then included it in the tsconfig.json.
declarations.d.ts
declare module "*.scss" {
  const content: { [className: string]: string };
  export = content;
}
tsconfig.json
"include": [
  "src/**/*.ts",
  "declarations.d.ts"
]
|
How to import a scss file into a typescript file
|
I am trying to import a .scss file into a .tsx typescript file.
However, I get the following error when I run npm run tsc:
src/public/app1/index.tsx:5:20 - error TS2307: Cannot find module
'./index.scss'.
5 import styles from "./index.scss";
~~~~~~~~~~~~~~
Found 1 error.
index.tsx
import React from "react";
import { render } from "react-dom";
import App from "./App";
import styles from "./index.scss";
render(
<App/>,
document.getElementById("root"),
);
index.scss
body {
background:red;
color: #005CC5;
text-align: center;
}
I saw one solution was to create a decleration.d.ts file in the root of my project. But this has not worked for me. Can't import CSS/SCSS modules. TypeScript says "Cannot Find Module"
decleration.d.ts
declare module '*.scss';
tsconfig.json
{
"compilerOptions": {
/* Basic Options */
// "incremental": true, /* Enable incremental compilation */
"target": "es6", /* Specify ECMAScript target version: 'ES3' (default), 'ES5', 'ES2015', 'ES2016', 'ES2017', 'ES2018', 'ES2019' or 'ESNEXT'. */
"module": "commonjs", /* Specify module code generation: 'none', 'commonjs', 'amd', 'system', 'umd', 'es2015', or 'ESNext'. */
// "lib": [], /* Specify library files to be included in the compilation. */
// "allowJs": true, /* Allow javascript files to be compiled. */
// "checkJs": true, /* Report errors in .js files. */
"jsx": "react", /* Specify JSX code generation: 'preserve', 'react-native', or 'react'. */
// "declaration": true, /* Generates corresponding '.d.ts' file. */
// "declarationMap": true, /* Generates a sourcemap for each corresponding '.d.ts' file. */
// "sourceMap": true, /* Generates corresponding '.map' file. */
// "outFile": "./", /* Concatenate and emit output to single file. */
"outDir": "./build", /* Redirect output structure to the directory. */
"rootDir": "./src", /* Specify the root directory of input files. Use to control the output directory structure with --outDir. */
// "composite": true, /* Enable project compilation */
// "tsBuildInfoFile": "./", /* Specify file to store incremental compilation information */
// "removeComments": true, /* Do not emit comments to output. */
// "noEmit": true, /* Do not emit outputs. */
// "importHelpers": true, /* Import emit helpers from 'tslib'. */
// "downlevelIteration": true, /* Provide full support for iterables in 'for-of', spread, and destructuring when targeting 'ES5' or 'ES3'. */
// "isolatedModules": true, /* Transpile each file as a separate module (similar to 'ts.transpileModule'). */
/* Strict Type-Checking Options */
"strict": true, /* Enable all strict type-checking options. */
// "noImplicitAny": true, /* Raise error on expressions and declarations with an implied 'any' type. */
// "strictNullChecks": true, /* Enable strict null checks. */
// "strictFunctionTypes": true, /* Enable strict checking of function types. */
// "strictBindCallApply": true, /* Enable strict 'bind', 'call', and 'apply' methods on functions. */
// "strictPropertyInitialization": true, /* Enable strict checking of property initialization in classes. */
// "noImplicitThis": true, /* Raise error on 'this' expressions with an implied 'any' type. */
// "alwaysStrict": true, /* Parse in strict mode and emit "use strict" for each source file. */
/* Additional Checks */
// "noUnusedLocals": true, /* Report errors on unused locals. */
// "noUnusedParameters": true, /* Report errors on unused parameters. */
// "noImplicitReturns": true, /* Report error when not all code paths in function return a value. */
// "noFallthroughCasesInSwitch": true, /* Report errors for fallthrough cases in switch statement. */
/* Module Resolution Options */
"moduleResolution": "node", /* Specify module resolution strategy: 'node' (Node.js) or 'classic' (TypeScript pre-1.6). */
"baseUrl": "./src", /* Base directory to resolve non-absolute module names. */
// "paths": {}, /* A series of entries which re-map imports to lookup locations relative to the 'baseUrl'. */
// "rootDirs": [], /* List of root folders whose combined content represents the structure of the project at runtime. */
// "typeRoots": [], /* List of folders to include type definitions from. */
// "types": [], /* Type declaration files to be included in compilation. */
// "allowSyntheticDefaultImports": true, /* Allow default imports from modules with no default export. This does not affect code emit, just typechecking. */
"esModuleInterop": true /* Enables emit interoperability between CommonJS and ES Modules via creation of namespace objects for all imports. Implies 'allowSyntheticDefaultImports'. */
// "preserveSymlinks": true, /* Do not resolve the real path of symlinks. */
// "allowUmdGlobalAccess": true, /* Allow accessing UMD globals from modules. */
/* Source Map Options */
// "sourceRoot": "", /* Specify the location where debugger should locate TypeScript files instead of source locations. */
// "mapRoot": "", /* Specify the location where debugger should locate map files instead of generated locations. */
// "inlineSourceMap": true, /* Emit a single file with source maps instead of having a separate file. */
// "inlineSources": true, /* Emit the source alongside the sourcemaps within a single file; requires '--inlineSourceMap' or '--sourceMap' to be set. */
/* Experimental Options */
// "experimentalDecorators": true, /* Enables experimental support for ES7 decorators. */
// "emitDecoratorMetadata": true, /* Enables experimental support for emitting type metadata for decorators. */
},
"include": [
"src",
],
"exclude": [
"node_modules"
]
}
|
[
"As OP mentioned, the current solution is to add a file called global.d.ts to your src dir with the following contents\ndeclare module '*.scss';\n\nIn my case, I'm using rollup-plugin-lit-css to import CSS as js objects. I had to create a /typings/declarations.d.ts file,\ndeclare module '*.css' {\n import { CSSResult } from 'lit-element';\n const css: CSSResult;\n export default css;\n}\n\nand reference it in tsconfig like so:\n{\n \"include\": [ \"typings\", \"src\" ],\n \"compilerOptions\": {\n \"target\": \"ESNext\",\n \"module\": \"CommonJS\",\n \"esModuleInterop\": true,\n \"noEmit\": true,\n \"allowJs\": true,\n \"checkJs\": true\n }\n}\n\n",
"Have you just tried import \"./index.scss\"; ?\n",
"I had this same issue and created a declarations file in the root directory and then included it in the tsconfig.json.\ndeclarations.d.ts\ndeclare module \"*.scss\" {\nconst content: {[className: string]: string};\nexport = content;\n\n}\ntsconfig.json\n \"include\": [\n\"src/**/*.ts\",\n\"declarations.d.ts\"\n\n]\n"
] |
[
27,
0,
0
] |
[] |
[] |
[
"reactjs",
"sass",
"typescript"
] |
stackoverflow_0056563243_reactjs_sass_typescript.txt
|
Q:
I can't send graphql query with where condition on hasura
I have to send a GraphQL query using Golang and Hasura. But I can't achieve that, because the query I used doesn't accept the where condition. The reason is that I want to send the where as a type. For example:
query MyQuery($where: popular_streamers_bool_exp!) {
popular_streamers(where: $where) {
first_name
last_name
}
}
type conditions struct {
FollowersCount struct {
Gte int `json:"_gte"`
} `json:"followers_count"`
Gender struct {
Eq string `json:"_eq,omitempty"`
} `json:"gender,omitempty"`
}
condition := conditions{}
condition.FollowersCount.Gte = 1
condition.Gender.Eq = "Male"
data, _ := json.Marshal(condition)
As you can see above, I have a query and a where condition. But when I send the query I get an error like this:
graphql: expected an object for type 'popular_streamers_bool_exp', but found a string
How can I solve this error? Thanks for your help.
A:
where in the query should be an object; you did not write the where clause properly.
You want to query popular_streamers, so you are visiting the database to get some data based on a condition, and you specify this condition with where.
query MyQuery($where: popular_streamers_bool_exp!) {
  # specificProperty: one of the columns in the table on which you want to write the condition
  # get me all popular_streamers where specificProperty is equal to the variable $where
  popular_streamers(where: {specificProperty: {_eq: $where}}) {
    first_name
    last_name
  }
}
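On the Go side, the root cause in the question is that json.Marshal turns the condition into a JSON string, while Hasura expects an object for popular_streamers_bool_exp. Below is a hedged sketch of passing the struct itself as a query variable; the machinebox/graphql client and the endpoint URL are assumptions for illustration, so adapt the calls to whichever client you actually use:
import (
    "context"

    "github.com/machinebox/graphql"
)

func fetchStreamers(condition conditions) error {
    client := graphql.NewClient("https://your-hasura-endpoint/v1/graphql") // hypothetical URL
    req := graphql.NewRequest(`
        query MyQuery($where: popular_streamers_bool_exp!) {
            popular_streamers(where: $where) { first_name last_name }
        }`)
    // Pass the struct (it serializes to a JSON object), not json.Marshal's string.
    req.Var("where", condition)

    var resp struct {
        PopularStreamers []struct {
            FirstName string `json:"first_name"`
            LastName  string `json:"last_name"`
        } `json:"popular_streamers"`
    }
    return client.Run(context.Background(), req, &resp)
}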
|
I can't send graphql query with where condition on hasura
|
I have to send a GraphQL query using Golang and Hasura. But I can't achieve that, because the query I used doesn't accept the where condition. The reason is that I want to send the where as a type. For example:
query MyQuery($where: popular_streamers_bool_exp!) {
popular_streamers(where: $where) {
first_name
last_name
}
}
type conditions struct {
FollowersCount struct {
Gte int `json:"_gte"`
} `json:"followers_count"`
Gender struct {
Eq string `json:"_eq,omitempty"`
} `json:"gender,omitempty"`
}
condition := conditions{}
condition.FollowersCount.Gte = 1
condition.Gender.Eq = "Male"
data, _ := json.Marshal(condition)
As you can see above, I have a query and a where condition. But when I send the query I get an error like this:
graphql: expected an object for type 'popular_streamers_bool_exp', but found a string
How can I solve this error? Thanks for your help.
|
[
"where in the query should be an object. You did not write where clause properly\nYou want to query \"popular_streamers. so you are visiting the database to get some data based on a condition and you specify this condition with where`.\nquery MyQuery($where: popular_streamers_bool_exp!) {\n // specificProperty one of the columns in the table where you want to write condition\n // get me all popular_streamers where specificProperty is equal to variable $where\n popular_streamers(where: {specificProperty:{_eq:$where}) {\n first_name\n last_name\n }\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"go",
"graphql",
"hasura"
] |
stackoverflow_0074024695_go_graphql_hasura.txt
|
Q:
Can I nest a ruby embed symbol <%= %> inside another <%= %>
I want to use a helper method inside a link_to tag as follows:
<%= link_to "Home", root_path, class:"nav-link <%= activeLink('home') %>" %>
This is my helper method
def activeLink(action_name)
if controller.action_name == action_name
"active"
end
end
I am getting an error saying:
ActionView::SyntaxErrorInTemplate in PagesController#about
I want that helper method to check the current action name and
set the 'active' CSS class if the current action matches the input action name.
A:
Can you use nested <%= ... %> notation? No.
However, what you're looking for is a form of string interpolation. As mentioned in the comments Ruby variables can be converted to strings in a couple ways (full details outlined in the linked guide).
The primary method you'll see is by using #{} within a string ie "This is my string #{ruby_variable}".
Which means you could use the following:
my_string = "World!"
hello_world_string = "Hello #{my_string}"
hello_world_string
=> "Hello World!"
|
Can I nest a ruby embed symbol <%= %> inside another <%= %>
|
I want to use a helper method inside a link_to tag as follows:
<%= link_to "Home", root_path, class:"nav-link <%= activeLink('home') %>" %>
This is my helper method
def activeLink(action_name)
if controller.action_name == action_name
"active"
end
end
I am getting an error saying:
ActionView::SyntaxErrorInTemplate in PagesController#about
I want that helper method to check the current action name and
set the 'active' CSS class if the current action matches the input action name.
|
[
"Can you use nested <%= ... %> notation? No.\nHowever, what you're looking for is a form of string interpolation. As mentioned in the comments Ruby variables can be converted to strings in a couple ways (full details outlined in the linked guide).\nThe primary method you'll see is by using #{} within a string ie \"This is my string #{ruby_variable}\".\nWhich means you could use the following:\nmy_string = \"World!\"\n\nhello_world_string = \"Hello #{my_string}\"\n\nhello_world_string\n=> \"Hello World!\"\n\n"
] |
[
0
] |
[] |
[] |
[
"erb",
"ruby",
"ruby_on_rails",
"ruby_on_rails_7"
] |
stackoverflow_0074660394_erb_ruby_ruby_on_rails_ruby_on_rails_7.txt
|
Q:
Redirect popup button onclick to a custom page
We have a popup form, which is triggered by the Omnisend application on Shopify - we want to redirect customers to a custom thank-you page made by us (the page is also made on Shopify). But there are simply no options for that in Omnisend.
Can I target or grab the class/id of the button with jQuery and make a custom redirection based on that? Since I cannot edit the source code, I can't create an onclick function.
Would something like this work?
$('#IDhere').click(function(){
Thank you so much if you try to help me! :)
A:
It is possible to use jQuery to redirect a user to a different page when they click on a button. However, it is important to note that if you are unable to edit the source code of the page, you will not be able to add the jQuery code directly to the page.
One way to achieve the desired behavior is to use a browser extension or bookmarklet that allows you to inject custom JavaScript into the page. This would allow you to add the necessary jQuery code to the page without having to edit the source code directly.
Once you have added the jQuery code to the page, you can use the following code to redirect the user to a custom thank-you page when they click on the button with the specified ID:
$('#IDhere').click(function(){
window.location.href = 'https://your-custom-thank-you-page.com';
});
Note that you will need to replace IDhere with the actual ID of the button, and https://your-custom-thank-you-page.com with the URL of your custom thank-you page.
It is also worth mentioning that using a browser extension or bookmarklet to inject custom code into a page is generally considered to be a hacky solution, and may not always work as intended. If possible, it would be best to try to find a way to edit the source code of the page directly, or to modify the Omnisend application to allow for custom redirects.
|
Redirect popup button onclick to a custom page
|
We have a popup form, which is triggered by the Omnisend application on Shopify - we want to redirect customers to a custom thank-you page made by us (the page is also made on Shopify). But there are simply no options for that in Omnisend.
Can I target or grab the class/id of the button with jQuery and make a custom redirection based on that? Since I cannot edit the source code, I can't create an onclick function.
Would something like this work?
$('#IDhere').click(function(){
Thank you so much if you try to help me! :)
|
[
"It is possible to use jQuery to redirect a user to a different page when they click on a button. However, it is important to note that if you are unable to edit the source code of the page, you will not be able to add the jQuery code directly to the page.\nOne way to achieve the desired behavior is to use a browser extension or bookmarklet that allows you to inject custom JavaScript into the page. This would allow you to add the necessary jQuery code to the page without having to edit the source code directly.\nOnce you have added the jQuery code to the page, you can use the following code to redirect the user to a custom thank-you page when they click on the button with the specified ID:\n$('#IDhere').click(function(){\n window.location.href = 'https://your-custom-thank-you-page.com';\n});\n\nNote that you will need to replace IDhere with the actual ID of the button, and https://your-custom-thank-you-page.com with the URL of your custom thank-you page.\nIt is also worth mentioning that using a browser extension or bookmarklet to inject custom code into a page is generally considered to be a hacky solution, and may not always work as intended. If possible, it would be best to try to find a way to edit the source code of the page directly, or to modify the Omnisend application to allow for custom redirects.\n"
] |
[
1
] |
[] |
[] |
[
"html",
"javascript",
"jquery",
"shopify"
] |
stackoverflow_0074661861_html_javascript_jquery_shopify.txt
|
Q:
MySQL - Error while connecting to MySQL Not all parameters were used in the SQL statement
I use MySQL connector (Python 3) and I would like to upload the values of a CSV into just one column of an existing table. I created the new column in the DB with:
ALTER TABLE myTable ADD `TEST` TEXT;
Now I created a Python query. What is the problem there?
#stvk is my dataframe
for i,row in stvk_u.iterrows():
print(row["datas_of_other_csv"])
cursor.execute("INSERT INTO myTable (TEST) VALUES(%s)",tuple(row["datas_of_other_csv"]))
But I get the error:
Error while connecting to MySQL Not all parameters were used in the SQL statement
Can I not just insert into an existing table? I do not see what is wrong.
Thanks in advance
A:
The statement requires exactly one parameter, while you provide more than one.
The tuple function splits a string into a tuple of characters:
$ python3 -c "print(tuple('foo'))"
('f', 'o', 'o')
Correct would be:
cursor.execute(statement, (row["datas_of_other_csv"],)) # note the comma at the end of tuple
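For completeness, a minimal sketch of the corrected loop, assuming stvk_u is the DataFrame from the question and the connection values are placeholders:
import mysql.connector

# hypothetical connection values for illustration
cnx = mysql.connector.connect(host="localhost", user="user",
                              password="secret", database="mydb")
cursor = cnx.cursor()

for i, row in stvk_u.iterrows():
    # one-element tuple: the trailing comma is what makes it a tuple
    cursor.execute("INSERT INTO myTable (TEST) VALUES (%s)",
                   (row["datas_of_other_csv"],))

cnx.commit()  # the INSERTs are not persisted until committed
For many rows, cursor.executemany() with a list of one-element tuples would also reduce the number of round trips.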
|
MySQL - Error while connecting to MySQL Not all parameters were used in the SQL statement
|
I use MySQL connector (Python3) and I would Like to upload in a existing Table values of an CSV just one Column. I created a new column in the DB with:
ALTER TABLE myTable ADD `TEST` TEXT;
Now I created a python Query what is the Problem there?
#stvk is my dataframe
for i,row in stvk_u.iterrows():
print(row["datas_of_other_csv"])
cursor.execute("INSERT INTO myTable (TEST) VALUES(%s)",tuple(row["datas_of_other_csv"]))
But I get the error:
Error while connecting to MySQL Not all parameters were used in the SQL statement
Can I do not just insert into a existing table? I do not see what is wrong.
Thanks in advance
|
[
"The statement requires exactly one parameter, while you provide > 1 parameter.\ntuple function splits a string and returns a tuple of characters:\n$ python3 -c \"print(tuple('foo'))\"\n('f', 'o', 'o')\n\nCorrect would be:\ncursor.execute(statement, (row[\"datas_of_other_csv\"],)) # note the comma at the end of tuple\n\n"
] |
[
1
] |
[] |
[] |
[
"mysql",
"mysql_connector",
"python_3.x"
] |
stackoverflow_0074660601_mysql_mysql_connector_python_3.x.txt
|
Q:
I want to read this csv file with pandas and display the first 5 records but I keep getting this error
I keep getting an error when I use df.head() on the dataframe I read in.
When I read in my CSV file and attempt to display the first 5 records, I use these lines:
df=pd.read_csv('US_Accidents_Dec21.csv')
df.head()
But I get the following error and I want to know how to fix it.
File ~\anaconda3\lib\site-packages\IPython\core\formatters.py:707, in PlainTextFormatter.__call__(self, obj)
700 stream = StringIO()
701 printer = pretty.RepresentationPrinter(stream, self.verbose,
702 self.max_width, self.newline,
703 max_seq_length=self.max_seq_length,
704 singleton_pprinters=self.singleton_printers,
705 type_pprinters=self.type_printers,
706 deferred_pprinters=self.deferred_printers)
--> 707 printer.pretty(obj)
708 printer.flush()
709 return stream.getvalue()
File ~\anaconda3\lib\site-packages\IPython\lib\pretty.py:410, in RepresentationPrinter.pretty(self, obj)
407 return meth(obj, self, cycle)
408 if cls is not object \
409 and callable(cls.__dict__.get('__repr__')):
--> 410 return _repr_pprint(obj, self, cycle)
412 return _default_pprint(obj, self, cycle)
413 finally:
File ~\anaconda3\lib\site-packages\IPython\lib\pretty.py:778, in _repr_pprint(obj, p, cycle)
776 """A pprint that just redirects to the normal repr function."""
777 # Find newlines and replace them with p.break_()
--> 778 output = repr(obj)
779 lines = output.splitlines()
780 with p.group():
File ~\anaconda3\lib\site-packages\pandas\core\frame.py:1011, in DataFrame.__repr__(self)
1008 return buf.getvalue()
1010 repr_params = fmt.get_dataframe_repr_params()
-> 1011 return self.to_string(**repr_params)
File ~\anaconda3\lib\site-packages\pandas\core\frame.py:1192, in DataFrame.to_string(self, buf, columns, col_space, header, index, na_rep, formatters, float_format, sparsify, index_names, justify, max_rows, max_cols, show_dimensions, decimal, line_width, min_rows, max_colwidth, encoding)
1173 with option_context("display.max_colwidth", max_colwidth):
1174 formatter = fmt.DataFrameFormatter(
1175 self,
1176 columns=columns,
(...)
1190 decimal=decimal,
1191 )
-> 1192 return fmt.DataFrameRenderer(formatter).to_string(
1193 buf=buf,
1194 encoding=encoding,
1195 line_width=line_width,
1196 )
File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1128, in DataFrameRenderer.to_string(self, buf, encoding, line_width)
1125 from pandas.io.formats.string import StringFormatter
1127 string_formatter = StringFormatter(self.fmt, line_width=line_width)
-> 1128 string = string_formatter.to_string()
1129 return save_to_buffer(string, buf=buf, encoding=encoding)
File ~\anaconda3\lib\site-packages\pandas\io\formats\string.py:25, in StringFormatter.to_string(self)
24 def to_string(self) -> str:
---> 25 text = self._get_string_representation()
26 if self.fmt.should_show_dimensions:
27 text = "".join([text, self.fmt.dimensions_info])
File ~\anaconda3\lib\site-packages\pandas\io\formats\string.py:40, in StringFormatter._get_string_representation(self)
37 if self.fmt.frame.empty:
38 return self._empty_info_line
---> 40 strcols = self._get_strcols()
42 if self.line_width is None:
43 # no need to wrap around just print the whole frame
44 return self.adj.adjoin(1, *strcols)
File ~\anaconda3\lib\site-packages\pandas\io\formats\string.py:31, in StringFormatter._get_strcols(self)
30 def _get_strcols(self) -> list[list[str]]:
---> 31 strcols = self.fmt.get_strcols()
32 if self.fmt.is_truncated:
33 strcols = self._insert_dot_separators(strcols)
File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:611, in DataFrameFormatter.get_strcols(self)
607 def get_strcols(self) -> list[list[str]]:
608 """
609 Render a DataFrame to a list of columns (as lists of strings).
610 """
--> 611 strcols = self._get_strcols_without_index()
613 if self.index:
614 str_index = self._get_formatted_index(self.tr_frame)
File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:875, in DataFrameFormatter._get_strcols_without_index(self)
871 cheader = str_columns[i]
872 header_colwidth = max(
873 int(self.col_space.get(c, 0)), *(self.adj.len(x) for x in cheader)
874 )
--> 875 fmt_values = self.format_col(i)
876 fmt_values = _make_fixed_width(
877 fmt_values, self.justify, minimum=header_colwidth, adj=self.adj
878 )
880 max_len = max(max(self.adj.len(x) for x in fmt_values), header_colwidth)
File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:889, in DataFrameFormatter.format_col(self, i)
887 frame = self.tr_frame
888 formatter = self._get_formatter(i)
--> 889 return format_array(
890 frame.iloc[:, i]._values,
891 formatter,
892 float_format=self.float_format,
893 na_rep=self.na_rep,
894 space=self.col_space.get(frame.columns[i]),
895 decimal=self.decimal,
896 leading_space=self.index,
897 )
File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1316, in format_array(values, formatter, float_format, na_rep, digits, space, justify, decimal, leading_space, quoting)
1301 digits = get_option("display.precision")
1303 fmt_obj = fmt_klass(
1304 values,
1305 digits=digits,
(...)
1313 quoting=quoting,
1314 )
-> 1316 return fmt_obj.get_result()
File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1347, in GenericArrayFormatter.get_result(self)
1346 def get_result(self) -> list[str]:
-> 1347 fmt_values = self._format_strings()
1348 return _make_fixed_width(fmt_values, self.justify)
File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1594, in FloatArrayFormatter._format_strings(self)
1593 def _format_strings(self) -> list[str]:
-> 1594 return list(self.get_result_as_array())
File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1511, in FloatArrayFormatter.get_result_as_array(self)
1508 return formatted
1510 if self.formatter is not None:
-> 1511 return format_with_na_rep(self.values, self.formatter, self.na_rep)
1513 if self.fixed_width:
1514 threshold = get_option("display.chop_threshold")
File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1503, in FloatArrayFormatter.get_result_as_array.<locals>.format_with_na_rep(values, formatter, na_rep)
1500 def format_with_na_rep(values: ArrayLike, formatter: Callable, na_rep: str):
1501 mask = isna(values)
1502 formatted = np.array(
-> 1503 [
1504 formatter(val) if not m else na_rep
1505 for val, m in zip(values.ravel(), mask.ravel())
1506 ]
1507 ).reshape(values.shape)
1508 return formatted
File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1504, in <listcomp>(.0)
1500 def format_with_na_rep(values: ArrayLike, formatter: Callable, na_rep: str):
1501 mask = isna(values)
1502 formatted = np.array(
1503 [
-> 1504 formatter(val) if not m else na_rep
1505 for val, m in zip(values.ravel(), mask.ravel())
1506 ]
1507 ).reshape(values.shape)
1508 return formatted
KeyError: ';,'
It's a lot to paste here and I don't know exactly what to detail because I'm a beginner with Python.
A:
The error message gives the following exception: KeyError: ';,'.
I suggest first verifying that your CSV file doesn't contain any errors. Are you able to open it in e.g. Excel? If yes: are you using the correct separator and delimiter? (See the sep and delimiter parameters in the pandas documentation.)
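As a concrete sketch: the ';,' in the KeyError hints that the file may actually be semicolon-separated, in which case passing the separator explicitly may already fix it (the sep value here is a guess, not something confirmed by the question):
import pandas as pd

# sep=';' is an assumption based on the KeyError ';,' - adjust to the file's real delimiter
df = pd.read_csv('US_Accidents_Dec21.csv', sep=';')
df.head()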
|
I want to read this csv file with pandas and display the first 5 records but I keep getting this error
|
I keep getting an error when i use df.head() on my dataframe I read in.
When I read in my CSV file and attempt to display The first 5 records, I use these lines
df=pd.read_csv('US_Accidents_Dec21.csv')
df.head()
But I Get the following error and I want to know how to fix it.
File ~\anaconda3\lib\site-packages\IPython\core\formatters.py:707, in PlainTextFormatter.__call__(self, obj)
700 stream = StringIO()
701 printer = pretty.RepresentationPrinter(stream, self.verbose,
702 self.max_width, self.newline,
703 max_seq_length=self.max_seq_length,
704 singleton_pprinters=self.singleton_printers,
705 type_pprinters=self.type_printers,
706 deferred_pprinters=self.deferred_printers)
--> 707 printer.pretty(obj)
708 printer.flush()
709 return stream.getvalue()
File ~\anaconda3\lib\site-packages\IPython\lib\pretty.py:410, in RepresentationPrinter.pretty(self, obj)
407 return meth(obj, self, cycle)
408 if cls is not object \
409 and callable(cls.__dict__.get('__repr__')):
--> 410 return _repr_pprint(obj, self, cycle)
412 return _default_pprint(obj, self, cycle)
413 finally:
File ~\anaconda3\lib\site-packages\IPython\lib\pretty.py:778, in _repr_pprint(obj, p, cycle)
776 """A pprint that just redirects to the normal repr function."""
777 # Find newlines and replace them with p.break_()
--> 778 output = repr(obj)
779 lines = output.splitlines()
780 with p.group():
File ~\anaconda3\lib\site-packages\pandas\core\frame.py:1011, in DataFrame.__repr__(self)
1008 return buf.getvalue()
1010 repr_params = fmt.get_dataframe_repr_params()
-> 1011 return self.to_string(**repr_params)
File ~\anaconda3\lib\site-packages\pandas\core\frame.py:1192, in DataFrame.to_string(self, buf, columns, col_space, header, index, na_rep, formatters, float_format, sparsify, index_names, justify, max_rows, max_cols, show_dimensions, decimal, line_width, min_rows, max_colwidth, encoding)
1173 with option_context("display.max_colwidth", max_colwidth):
1174 formatter = fmt.DataFrameFormatter(
1175 self,
1176 columns=columns,
(...)
1190 decimal=decimal,
1191 )
-> 1192 return fmt.DataFrameRenderer(formatter).to_string(
1193 buf=buf,
1194 encoding=encoding,
1195 line_width=line_width,
1196 )
File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1128, in DataFrameRenderer.to_string(self, buf, encoding, line_width)
1125 from pandas.io.formats.string import StringFormatter
1127 string_formatter = StringFormatter(self.fmt, line_width=line_width)
-> 1128 string = string_formatter.to_string()
1129 return save_to_buffer(string, buf=buf, encoding=encoding)
File ~\anaconda3\lib\site-packages\pandas\io\formats\string.py:25, in StringFormatter.to_string(self)
24 def to_string(self) -> str:
---> 25 text = self._get_string_representation()
26 if self.fmt.should_show_dimensions:
27 text = "".join([text, self.fmt.dimensions_info])
File ~\anaconda3\lib\site-packages\pandas\io\formats\string.py:40, in StringFormatter._get_string_representation(self)
37 if self.fmt.frame.empty:
38 return self._empty_info_line
---> 40 strcols = self._get_strcols()
42 if self.line_width is None:
43 # no need to wrap around just print the whole frame
44 return self.adj.adjoin(1, *strcols)
File ~\anaconda3\lib\site-packages\pandas\io\formats\string.py:31, in StringFormatter._get_strcols(self)
30 def _get_strcols(self) -> list[list[str]]:
---> 31 strcols = self.fmt.get_strcols()
32 if self.fmt.is_truncated:
33 strcols = self._insert_dot_separators(strcols)
File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:611, in DataFrameFormatter.get_strcols(self)
607 def get_strcols(self) -> list[list[str]]:
608 """
609 Render a DataFrame to a list of columns (as lists of strings).
610 """
--> 611 strcols = self._get_strcols_without_index()
613 if self.index:
614 str_index = self._get_formatted_index(self.tr_frame)
File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:875, in DataFrameFormatter._get_strcols_without_index(self)
871 cheader = str_columns[i]
872 header_colwidth = max(
873 int(self.col_space.get(c, 0)), *(self.adj.len(x) for x in cheader)
874 )
--> 875 fmt_values = self.format_col(i)
876 fmt_values = _make_fixed_width(
877 fmt_values, self.justify, minimum=header_colwidth, adj=self.adj
878 )
880 max_len = max(max(self.adj.len(x) for x in fmt_values), header_colwidth)
File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:889, in DataFrameFormatter.format_col(self, i)
887 frame = self.tr_frame
888 formatter = self._get_formatter(i)
--> 889 return format_array(
890 frame.iloc[:, i]._values,
891 formatter,
892 float_format=self.float_format,
893 na_rep=self.na_rep,
894 space=self.col_space.get(frame.columns[i]),
895 decimal=self.decimal,
896 leading_space=self.index,
897 )
File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1316, in format_array(values, formatter, float_format, na_rep, digits, space, justify, decimal, leading_space, quoting)
1301 digits = get_option("display.precision")
1303 fmt_obj = fmt_klass(
1304 values,
1305 digits=digits,
(...)
1313 quoting=quoting,
1314 )
-> 1316 return fmt_obj.get_result()
File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1347, in GenericArrayFormatter.get_result(self)
1346 def get_result(self) -> list[str]:
-> 1347 fmt_values = self._format_strings()
1348 return _make_fixed_width(fmt_values, self.justify)
File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1594, in FloatArrayFormatter._format_strings(self)
1593 def _format_strings(self) -> list[str]:
-> 1594 return list(self.get_result_as_array())
File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1511, in FloatArrayFormatter.get_result_as_array(self)
1508 return formatted
1510 if self.formatter is not None:
-> 1511 return format_with_na_rep(self.values, self.formatter, self.na_rep)
1513 if self.fixed_width:
1514 threshold = get_option("display.chop_threshold")
File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1503, in FloatArrayFormatter.get_result_as_array.<locals>.format_with_na_rep(values, formatter, na_rep)
1500 def format_with_na_rep(values: ArrayLike, formatter: Callable, na_rep: str):
1501 mask = isna(values)
1502 formatted = np.array(
-> 1503 [
1504 formatter(val) if not m else na_rep
1505 for val, m in zip(values.ravel(), mask.ravel())
1506 ]
1507 ).reshape(values.shape)
1508 return formatted
File ~\anaconda3\lib\site-packages\pandas\io\formats\format.py:1504, in <listcomp>(.0)
1500 def format_with_na_rep(values: ArrayLike, formatter: Callable, na_rep: str):
1501 mask = isna(values)
1502 formatted = np.array(
1503 [
-> 1504 formatter(val) if not m else na_rep
1505 for val, m in zip(values.ravel(), mask.ravel())
1506 ]
1507 ).reshape(values.shape)
1508 return formatted
KeyError: ';,'
Its a lot to paste here and I dont know exactly what to detail because Im a beginner with using Python.
|
[
"The error message gives the following exception: KeyError: ';,'.\nI suggest verifying that your CSV-file doesn't contain any errors first. Are you able to open it in e.g. Excel? If yes: are you using the correct separator and delimiter? (See the sep and delimiter parameters in the documentation)\n"
] |
[
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074661778_pandas_python.txt
|
Q:
What is the difference between a = ListNode(-1), b = ListNode(-1) and a = b = ListNode(-1) in python
In Python:
what's the difference between a = ListNode(-1), b = ListNode(-1) and a = b = ListNode(-1)?
A:
In Python, when you create a new instance of an object, you are creating a new object in memory. For example:
a = ListNode(-1)
b = ListNode(-1)
In this case, you are creating two separate ListNode objects, a and b, which are stored in different locations in memory.
On the other hand, when you use the assignment operator = in the following way:
a = b = ListNode(-1)
You are creating a single ListNode object and assigning it to two different variables, a and b. This means that a and b will both reference the same object in memory. Any changes made to the object through either a or b will be reflected in the other variable.
In other words, the difference between the two examples is that in the first case, you are creating two separate objects, while in the second case, you are creating a single object and assigning it to multiple variables.
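To make the difference concrete, here is a small demonstration (using a minimal ListNode definition, since the question does not show one):
class ListNode:
    def __init__(self, val):
        self.val = val

x = ListNode(-1)
y = ListNode(-1)
print(x is y)   # False: two distinct objects

a = b = ListNode(-1)
print(a is b)   # True: one object, two names
a.val = 42
print(b.val)    # 42: the mutation is visible through b as well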
|
What is the difference between a = ListNode(-1), b = ListNode(-1) and a = b = ListNode(-1) in python
|
In python:
what's the difference between a = ListNode(-1), b = ListNode(-1) and a = b = ListNode(-1)
|
[
"In python, when you create a new instance of an object, you are creating a new object in memory. For example:\na = ListNode(-1)\nb = ListNode(-1)\n\nIn this case, you are creating two separate ListNode objects, a and b, which are stored in different locations in memory.\nOn the other hand, when you use the assignment operator = in the following way:\na = b = ListNode(-1)\n\nYou are creating a single ListNode object and assigning it to two different variables, a and b. This means that a and b will both reference the same object in memory. Any changes made to the object through either a or b will be reflected in the other variable.\nIn other words, the difference between the two examples is that in the first case, you are creating two separate objects, while in the second case, you are creating a single object and assigning it to multiple variables.\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074661831_python.txt
|
Q:
how can I specify the length/width of the border under li
I want a border under the li, but the border is taking up the full width. Is there a way to add a bottom border which doesn't take the full width of the li?
.aboutusNav a {
text-decoration: none;
font-weight: normal;
color: black
}
.aboutusNav {
border-bottom: 3px solid transparent;
display: inline-block;
}
.aboutusNav:hover {
border-bottom: 3px solid #EB644C;
}
<ul>
<li class="aboutusNav"><a>AboutUs</a></li>
</ul>
A:
Perhaps try using a pseudo-element ::before or ::after as the border, style it as needed and add a transition.
More about pseudo elements
This example uses ::before just so that if ::after is also added later for another effect, its natural position is stacked on top of ::before. But it seems that using either ::before or ::after would not make a significant difference in this use case alone.
Example:
.aboutusNav a {
text-decoration: none;
font-weight: normal;
color: black;
}
.aboutusNav {
display: inline-block;
position: relative;
cursor: pointer;
}
.aboutusNav::before {
content: "";
position: absolute;
left: 0;
right: 0;
bottom: -3px;
height: 3px;
background-color: #eb644c;
transform-origin: center;
transform: scaleX(0);
transition: all 0.2s ease-in-out;
}
.aboutusNav:hover::before {
transform: scaleX(0.5);
}
<ul>
<li class="aboutusNav"><a>AboutUs</a></li>
</ul>
|
how can I specify the length/width of the border under li
|
I want a border under the li, but the border is taking up the full width. Is there a way to add a border bottom which doesn't takes the full width of li
.aboutusNav a {
text-decoration: none;
font-weight: normal;
color: black
}
.aboutusNav {
border-bottom: 3px solid transparent;
display: inline-block;
}
.aboutusNav:hover {
border-bottom: 3px solid #EB644C;
}
<ul>
<li class="aboutusNav"><a>AboutUs</a></li>
</ul>
|
[
"Perhaps try use a pseudo element ::before or ::after as the border, style it as needed and add a transition.\nMore about pseudo elements\nThis example uses ::before just so that if ::after is also added later for another effect, its natural position is stacked on top of ::before. But it seems that using either ::before or ::after would not make a significant difference in this use case alone.\nExample:\n\n\n.aboutusNav a {\n text-decoration: none;\n font-weight: normal;\n color: black;\n}\n\n.aboutusNav {\n display: inline-block;\n position: relative;\n cursor: pointer;\n}\n\n.aboutusNav::before {\n content: \"\";\n position: absolute;\n left: 0;\n right: 0;\n bottom: -3px;\n height: 3px;\n background-color: #eb644c;\n transform-origin: center;\n transform: scaleX(0);\n transition: all 0.2s ease-in-out;\n}\n\n.aboutusNav:hover::before {\n transform: scaleX(0.5);\n}\n<ul>\n <li class=\"aboutusNav\"><a>AboutUs</a></li>\n</ul>\n\n\n\n"
] |
[
2
] |
[] |
[] |
[
"border",
"css",
"hover",
"html"
] |
stackoverflow_0074661771_border_css_hover_html.txt
|
Q:
Android - Show full screen notification when app is in background
I'd like to show a full screen notification when my app is receiving an incoming call.
Basically the same as WhatsApp, i.e. when the phone receives a notification and the app is not started, I want to show a view (activity, notification, whatever you want to call it) that takes the full size of the screen.
Everything is working except one thing: the "view" is never shown in full screen, only as a classic notification.
Here's my code, pretty straightforward.
Can someone pinpoint any issue in it? Am I missing some kind of permission or something?
Any help would be welcomed.
Intent incomingCallDialog = new Intent(context, IncomingCallActivity.class);
incomingCallDialog.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
incomingCallDialog.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
if (context.getApplication() != null && ((MyApplication) context.getApplication()).getCurrentActivity() == null) {
PendingIntent fullScreenPendingIntent = PendingIntent.getActivity(context, 0,
incomingCallDialog, PendingIntent.FLAG_UPDATE_CURRENT | PendingIntent.FLAG_IMMUTABLE);
NotificationCompat.Builder notificationBuilder =
new NotificationCompat.Builder(context);
notificationBuilder.setSmallIcon(R.mipmap.ic_launcher);
notificationBuilder.setContentTitle(context.getString(R.string.incoming_call_from, email));
notificationBuilder.setLargeIcon(BitmapFactory.decodeResource(context.getResources(), R.mipmap.ic_launcher));
notificationBuilder.setContentText(caller.getName());
notificationBuilder.setPriority(NotificationCompat.PRIORITY_MAX);
notificationBuilder.setCategory(NotificationCompat.CATEGORY_CALL);
notificationBuilder.setFullScreenIntent(fullScreenPendingIntent, true);
notificationBuilder.setContentIntent(fullScreenPendingIntent);
notificationBuilder.setTimeoutAfter(30 * 1000);
notificationBuilder.setAutoCancel(true);
notificationBuilder.setWhen(0);
NotificationManager manager = (NotificationManager) context.getSystemService(NOTIFICATION_SERVICE);
String channelId = "my_channel";
NotificationChannel channel = new NotificationChannel(
channelId,
"channelname",
NotificationManager.IMPORTANCE_HIGH);
channel.setLockscreenVisibility(Notification.VISIBILITY_PUBLIC);
manager.createNotificationChannel(channel);
notificationBuilder.setChannelId(channelId);
manager.cancel(NOTIFICATION_CALL_ID);
manager.notify(NOTIFICATION_CALL_ID, notificationBuilder.build());
} else {
context.startActivity(incomingCallDialog);
}
A:
You will have to request a permission Manifest.permission.USE_FULL_SCREEN_INTENT in order to use full screen intents.
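For reference, the permission (a normal permission introduced in Android 10, API level 29) is declared in AndroidManifest.xml:
<uses-permission android:name="android.permission.USE_FULL_SCREEN_INTENT" />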
Also, if your notification is an ongoing one, such as an incoming phone call, associate the notification with a foreground service. The following code snippet shows how to display a notification that's associated with a foreground service:
// Provide a unique integer for the "notificationId" of each notification.
startForeground(notificationId, notification);
Note: The system UI may choose to display a heads-up notification, instead of launching your full-screen intent, while the user is using the device.
|
Android - Show full screen notification when app is in background
|
I'd like to show a full screen notification when my app is receiving an incoming call.
Basically the same as WhatsApp, ie. when the phone received a notification and the app is not started, I want to show a view (activity, notification, whatever you want to call it) that is taking the full size of the screen.
Everything is working except one thing: the "view" is never shown in full screen, only as a classic notification.
Here's my code, pretty straightforward.
Can someone pinpoint any issue in it? Am I missing some kind of permission or something?
Any help would be welcomed.
Intent incomingCallDialog = new Intent(context, IncomingCallActivity.class);
incomingCallDialog.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
incomingCallDialog.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
if (context.getApplication() != null && ((MyApplication) context.getApplication()).getCurrentActivity() == null) {
PendingIntent fullScreenPendingIntent = PendingIntent.getActivity(context, 0,
incomingCallDialog, PendingIntent.FLAG_UPDATE_CURRENT | PendingIntent.FLAG_IMMUTABLE);
NotificationCompat.Builder notificationBuilder =
new NotificationCompat.Builder(context);
notificationBuilder.setSmallIcon(R.mipmap.ic_launcher);
notificationBuilder.setContentTitle(context.getString(R.string.incoming_call_from, email));
notificationBuilder.setLargeIcon(BitmapFactory.decodeResource(context.getResources(), R.mipmap.ic_launcher));
notificationBuilder.setContentText(caller.getName());
notificationBuilder.setPriority(NotificationCompat.PRIORITY_MAX);
notificationBuilder.setCategory(NotificationCompat.CATEGORY_CALL);
notificationBuilder.setFullScreenIntent(fullScreenPendingIntent, true);
notificationBuilder.setContentIntent(fullScreenPendingIntent);
notificationBuilder.setTimeoutAfter(30 * 1000);
notificationBuilder.setAutoCancel(true);
notificationBuilder.setWhen(0);
NotificationManager manager = (NotificationManager) context.getSystemService(NOTIFICATION_SERVICE);
String channelId = "my_channel";
NotificationChannel channel = new NotificationChannel(
channelId,
"channelname",
NotificationManager.IMPORTANCE_HIGH);
channel.setLockscreenVisibility(Notification.VISIBILITY_PUBLIC);
manager.createNotificationChannel(channel);
notificationBuilder.setChannelId(channelId);
manager.cancel(NOTIFICATION_CALL_ID);
manager.notify(NOTIFICATION_CALL_ID, notificationBuilder.build());
} else {
context.startActivity(incomingCallDialog);
}
|
[
"You will have to request a permission Manifest.permission.USE_FULL_SCREEN_INTENT in order to use full screen intents.\nAlso, If your notification is an ongoing one, such as an incoming phone call, associate the notification with a foreground service. The following code snippet shows how to display a notification that's associated with a foreground service:\n// Provide a unique integer for the \"notificationId\" of each notification.\nstartForeground(notificationId, notification);\n\n\nNote: The system UI may choose to display a heads-up notification, instead of launching your full-screen intent, while the user is using the device.\n\n"
] |
[
1
] |
[] |
[] |
[
"android",
"android_notifications",
"notifications",
"push_notification"
] |
stackoverflow_0074613041_android_android_notifications_notifications_push_notification.txt
|
Q:
In Google Cloud's Logs Explorer, how do you query for a key's existence in the jsonPayload dict?
In Google's Cloud Logging query language, is it possible to query for the existence of a particular key in the jsonPayload dict?
E.g., suppose I know the jsonPayload will either be
{'keyA':'<some string>'}
or
{'keyB':'<some string>'}
But I don't know what the <some string> will be. I want all logs that have the keyB key. I suppose I could test for that key with a regex that matches everything, but is that the best/only way?
A:
I want all logs that have the keyB key. I suppose I could test that for that key having a regex that includes everything, but is that the best/only way?
I could think of using Regex for this use case:
Example:
jsonPayload.message =~"Job status: *"
In your case keyB
jsonPayload.keyB =~"regex-query"
For my example, when the expression matches, the matching log entries are returned.
A:
Use the "has" operator with a *, like
jsonPayload.keyB:*
See the end of https://cloud.google.com/logging/docs/view/logging-query-language#comparisons
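The has-operator also composes with other comparisons if you need to narrow the results, for example (the severity clause is only an illustration):
jsonPayload.keyB:* AND severity>=ERROR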
|
In Google Cloud's Logs Explorer, how do you query for a key's existence in the jsonPayload dict?
|
In Google's Cloud Logging query language, is it possible to query for the existence of a particular key in the jsonPayload dict?
E.g., suppose I know the jsonPayload will either be
{'keyA':'<some string>'}
or
{'keyB':'<some string'}
But I don't know what the <some string> will be. I want all logs that have the keyB key. I suppose I could test that for that key having a regex that includes everything, but is that the best/only way?
|
[
"\nI want all logs that have the keyB key. I suppose I could test that for that key having a regex that includes everything, but is that the best/only way?\n\nI could think of using Regex for this use case:\nExample:\njsonPayload.message =~\"Job status: *\" \n\nIn your case keyB\njsonPayload.keyB =~\"regex-query\"\n\nFor my example, when the expression matches, getting output as below:\n\n",
"Use the \"has\" operator with a *, like\njsonPayload.keyB:*\nSee the end of https://cloud.google.com/logging/docs/view/logging-query-language#comparisons\n"
] |
[
1,
0
] |
[] |
[] |
[
"google_cloud_logging"
] |
stackoverflow_0074608203_google_cloud_logging.txt
|
Q:
Cumulative sum of a pandas dataframe column without for loop?
I have a pandas dataframe df.
There's a column "a". I need to compute a column "b" which is a cumulative sum of "a" with an offset of 1 row.
So it's something like
df["b"][0] = 0
for i in range(len(df["a"]) - 1):
df["b"][i + 1] = df["b"][i] + df["a"][i]
I am wondering if there's a built-in function that will allow me to do this without the for loop?
Here's an example with numbers:
df = {'a': [1, 2, 3, 4]}
After the above algorithm we should end up with
df = {'a': [1, 2, 3, 4], 'b': [0, 1, 3, 6]}
A:
You can use pandas.Series.cumsum with pandas.Series.shift :
import pandas as pd
df = pd.DataFrame({'a': [1, 2, 3, 4]})
df["b"] = df["a"].cumsum().shift(periods=1).fillna(0).astype(int)
# Output :
print(df)
a b
0 1 0
1 2 1
2 3 3
3 4 6
A:
IIUC, you just need:
df["b"] = df["a"].shift(fill_value=0).cumsum()
print(df):
a b
0 1 0
1 2 1
2 3 3
3 4 6
|
Cumulative sum of a pandas dataframe column without for loop?
|
I have a pandas dataframe df.
There's a column "a". I need to compute a column "b" which is a cumulative sum of "a" with an offset of 1 row.
So it's something like
df["b"][0] = 0
for i in len(df["a"]) - 1:
df["b"][i + 1] = df["b"][i] + df["a"][i]
I am wondering if there's a built in function that will allow me to this without the for loop?
Here's an example with numbers:
df = {'a': [1, 2, 3, 4]}
After the above algorithm we should end up with
df = {'a': [1, 2, 3, 4], 'b': [0, 1, 3, 6]}
|
[
"You can use pandas.Series.cumsum with pandas.Series.shift :\nimport pandas as pd\n\ndf = pd.DataFrame({'a': [1, 2, 3, 4]})\n\ndf[\"b\"] = df[\"a\"].cumsum().shift(periods=1).fillna(0).astype(int)\n\n# Output :\nprint(df)\n\n a b\n0 1 0\n1 2 1\n2 3 3\n3 4 6\n\n",
"IIUC, you just need:\ndf[\"b\"] = df[\"a\"].shift(fill_value=0).cumsum()\n\nprint(df):\n a b\n0 1 0\n1 2 1\n2 3 3\n3 4 6\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"dataframe",
"pandas"
] |
stackoverflow_0074661605_dataframe_pandas.txt
|
Q:
Flask server timeouts at 30 sec with gunicorn
Here is a minimal example of the code. The curl request curl http://127.0.0.1:5000/get_zip/my_zip.zip -o my_zip.zip should send the user the file archive/my_zip.zip
It works correctly without gunicorn, but disconnects after 30 seconds when the server is launched with gunicorn.
from os import path
from flask import Flask, request, jsonify, json, send_file
app = Flask(__name__)
@app.route('/get_zip/<file_path>', methods=['GET'])
def get_zip(file_path): # file_path: path to a large zip file
return send_file(path.join('archive', file_path), as_attachment=True)
if __name__ == '__main__':
app.run(host="0.0.0.0", port="5000", debug=False, use_reloader=False)
What is the correct way to fix this disconnect when run under gunicorn?
A:
30 seconds is the default timeout value for gunicorn.
To increase it, use the --timeout <seconds> parameter in your gunicorn config.
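For example, on the command line (the app module name here is an assumption based on the question's single-file layout):
gunicorn --timeout 120 --bind 0.0.0.0:5000 app:app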
Also if you run gunicorn under nginx, don't forget to manage nginx's settings:
proxy_connect_timeout <seconds>s;
proxy_read_timeout <seconds>s;
UPDATE:
it's better and safer to send files from Flask by using send_from_directory
|
Flask server timeouts at 30 sec with gunicorn
|
Here is the minimal example of the code. curl request curl http://127.0.0.1:5000/get_zip/my_zip.zip -o my_zip.zip should send user the file archive/my_zip.zip
It works correctly without gunicorn and disconnects after 30 seconds, when the server is launched with gunicorn.
from os import path
from flask import Flask, request, jsonify, json, send_file
app = Flask(__name__)
@app.route('/get_zip/<file_path>', methods=['GET'])
def get_zip(file_path): # file_path: path to a large zip file
return send_file(path.join('archive', file_path), as_attachment=True)
if __name__ == '__main__':
app.run(host="0.0.0.0", port="5000", debug=False, use_reloader=False)
What is the correct way to fix this disconnect when ran under gunicorn?
|
[
"30 seconds is default timeout value for gunicorn.\nTo increase it use --timeout <seconds> parameter on your gunicorn config.\nAlso if you run gunicorn under nginx, don't forget to manage nginx's settings:\nproxy_connect_timeout <seconds>s;\nproxy_read_timeout <seconds>s;\n\nUPDATE:\nit's better and safer to send files from flask by using send_from_directory\n"
] |
[
3
] |
[] |
[] |
[
"flask",
"gunicorn",
"python"
] |
stackoverflow_0074661482_flask_gunicorn_python.txt
|
Q:
Tag structures in Rust
In C++ I can use a template parameter as a tag, to make identical but otherwise unrelated datatypes:
template<typename T>
struct UniqueId
{
int Value;
};
struct CustomerTag{};
struct BookTag{};
using BookId = UniqueId<BookTag>;
using CustomerId = UniqueId<CustomerTag>;
I can do the same thing in Rust, but run into problems because my type starts acting like it owns a T, which it does not. So now, in order to make my type Clone, Send, etc. my tags must also be Clone, Send, etc.. This is a little odd since my type doesn't really own the T, just uses it as a parameter. Is there any way around this? The documentation seems to suggest PhantomData<*const T> will fix this problem, but it doesn't seem to because then I just get *const BookTag cannot be sent between threads safely errors instead of BookTag cannot be sent between threads safely errors.
struct UniqueId<T>
{
Value : i32,
Phantom : PhantomData<*const T>
}
A:
Use PhantomData<fn(T) -> T>, which is invariant over T and always Copy, Clone, Send, and Sync. The only downside is that you will have to write manual Copy and Clone implementations for your UniqueId<T> struct, because the derive macros currently always generate a T: Copy/T: Clone bound, even when it is unnecessary (see this issue).
Playground example
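For reference, a minimal sketch of that approach (field names lowercased to follow Rust conventions; assert_send is just an illustrative helper):
use std::marker::PhantomData;

struct UniqueId<T> {
    value: i32,
    phantom: PhantomData<fn(T) -> T>,
}

// Manual impls avoid the unnecessary T: Copy / T: Clone bounds
impl<T> Clone for UniqueId<T> {
    fn clone(&self) -> Self { *self }
}
impl<T> Copy for UniqueId<T> {}

struct CustomerTag;
struct BookTag;

type BookId = UniqueId<BookTag>;
type CustomerId = UniqueId<CustomerTag>;

fn assert_send<T: Send>() {}

fn main() {
    assert_send::<BookId>(); // compiles: fn pointers are always Send + Sync
    let id = BookId { value: 1, phantom: PhantomData };
    let copy = id; // Copy works even though BookTag derives nothing
    println!("{} {}", id.value, copy.value);
}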
|
Tag structures in Rust
|
In C++ I can use a template parameter as a tag, to make identical but otherwise unrelated datatypes::
template<typename T>
struct UniqueId
{
int Value;
};
struct CustomerTag{};
struct BookTag{};
using BookId = UniqueId<BookTag>;
using CustomerId = UniqueId<CustomerTag>;
I can do the same thing in Rust, but run into problems because my type starts acting like it owns a T, which it does not. So now, in order to make my type Clone, Send, etc. my tags must also be Clone, Send, etc.. This is a little odd since my type doesn't really own the T, just uses it as a parameter. Is there any way around this? The documentation seems to suggest PhantomData<*const T> will fix this problem, but it doesn't seem to because then I just get *const BookTag cannot be sent between threads safely errors instead of BookTag cannot be sent between threads safely errors.
struct UniqueId<T>
{
Value : i32,
Phantom : PhantomData<*const T>
}
|
[
"Use PhantomData<fn(T) -> T>, which is invariant over T and always Copy, Clone, Send, and Sync. The only downside is that you will have to write manual Copy and Clone implementations for your UniqueId<T> struct, because the derive macros currently always generate a T: Copy/T: Clone bound, even when it is unnecessary (see this issue).\nPlayground example\n"
] |
[
1
] |
[] |
[] |
[
"rust"
] |
stackoverflow_0074655953_rust.txt
|
Q:
Efficient retrieval of lat-lon points that are within a square boundary
I have a react-native application that populates pins on a map that have been submitted by users. The front end gets the corners of the window and then the back end goes through each pin to check if it falls within the boundary, and returns the ones that do.
This is taking too long on the backend and I want to ask the community for ideas, because I doubt I have the best one.
My idea is to store tables of pins grouped by quadrants, effectively a cache, and then I can in almost constant time return the pins from the quadrants involved.
Is there a simpler way to do this?
Maybe using NoSQL?
A:
A month later, it seems geohashing is probably the best way; plus, AWS has a library for automatically handling this with DynamoDB. Apparently it takes the corners of the screen (lat/lon) and automatically returns the items from the DB that are in view, in, I assume, roughly constant time, since that's the whole point of geohashing: getting performance that works at scale.
https://www.npmjs.com/package/dynamodb-geo
https://aws.amazon.com/blogs/compute/implementing-geohashing-at-scale-in-serverless-web-applications/
Otherwise, a geohashing library built for serving mobile apps likely exists.
|
Efficient retrieval of lat-lon points that are within a square boundary
|
I have a react-native application that populates pins on a map that have been submitted by users. The front end gets the corners of the window and then the back end goes through each pin to check if it falls within the boundary, and returns the ones that do.
This is taking too long on the backend and I want to ask the community for ideas, because I doubt I have the best one.
My idea is to store tables of pins grouped by quadrants, effectively a cache, and then I can in almost constant time return the pins from the quadrants involved.
Is there a simpler way to do this?
Maybe using NoSQL?
|
[
"A month later it seems geohashing is probably the best way, plus AWS has a library for automatically handling this with dynamodb. Apparently it takes the corners of the screen, lat/lon, and automatically returns the items from the DB in the view, in, I assume, constant time, since that's the whole point of geohashing, getting performance that works at scale..\nhttps://www.npmjs.com/package/dynamodb-geo\nhttps://aws.amazon.com/blogs/compute/implementing-geohashing-at-scale-in-serverless-web-applications/\nOtherwise, using a geohashing library that is built for serving mobile apps likely exists.\n"
] |
[
0
] |
[] |
[] |
[
"algorithm",
"database",
"maps",
"react_native"
] |
stackoverflow_0074472462_algorithm_database_maps_react_native.txt
|
Q:
List of quarter hours between two timestamps in python
I have two timestamps in python and I need to get all quarter hours between those timestamps. Any idea how to do this?
A:
To get a list of all quarter hours between two timestamps in Python, you can use the dateutil.rrule module to create a dateutil.rrule.rrule object with the freq argument set to dateutil.rrule.MINUTELY and the interval argument set to 15 to generate a list of datetime objects separated by 15 minute intervals. You can then iterate over the rrule object and extract the time from each datetime object to create a list of quarter-hour timestamps.
Here's an example:
from datetime import datetime
from dateutil.rrule import rrule, MINUTELY
# Timestamps for start and end
timestamp1 = 1605653932
timestamp2 = 1605656932
# Create a datetime object for the start timestamp
start = datetime.fromtimestamp(timestamp1)
# Create a datetime object for the end timestamp
end = datetime.fromtimestamp(timestamp2)
# Create a list of quarter-hour timestamps between the start and end timestamps
timestamps = []
for dt in rrule(freq=MINUTELY, interval=15, dtstart=start, until=end):
timestamps.append(dt.strftime('%H:%M'))
# Print the list of quarter-hour timestamps
print(timestamps)
This code will generate a list of quarter-hour timestamps in the format 'HH:MM', where HH is the hour and MM is the minute, for all quarter hours between the start and end timestamps. Note that datetime.fromtimestamp converts to the local timezone, so the exact values depend on where the code runs. For the timestamps 1605653932 and 1605656932, the output would be:
['23:58', '00:13', '00:28', '00:43']
If you want to start with the next quarter-hour after the start timestamp, you can change the code as follows:
from datetime import datetime, timedelta
from dateutil.rrule import rrule, MINUTELY
# Timestamps for start and end
timestamp1 = 1605653932
timestamp2 = 1605656932
# Create a datetime object for the start timestamp
start = datetime.fromtimestamp(timestamp1)
# Create a datetime object for the end timestamp
end = datetime.fromtimestamp(timestamp2)
# Find the next quarter-hour after the start timestamp
start_minutes = start.minute
start_next_quarter_hour = start + timedelta(minutes=15 - start_minutes % 15)
# Create a list of quarter-hour timestamps between the start and end timestamps
timestamps = []
for dt in rrule(freq=MINUTELY, interval=15, dtstart=start_next_quarter_hour, until=end):
timestamps.append(dt.strftime('%H:%M'))
# Print the list of quarter-hour timestamps
print(timestamps)
This will give the following output:
['00:00', '00:15', '00:30', '00:45']
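A possible alternative is to let pandas build the aligned range in one call (a sketch; assumes start and end are the datetime objects from above):
import pandas as pd

# ceil snaps the start up to the next quarter-hour boundary
quarters = pd.date_range(start=pd.Timestamp(start).ceil('15min'),
                         end=end, freq='15min')
print([ts.strftime('%H:%M') for ts in quarters])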
|
List of quarter hours between two timestamps in python
|
I have two timestamps in python and I need to get all quarter hours between those timestamps. Any idea how to do this?
|
[
"To get a list of all quarter hours between two timestamps in Python, you can use the dateutil.rrule module to create a dateutil.rrule.rrule object with the freq argument set to dateutil.rrule.MINUTELY and the interval argument set to 15 to generate a list of datetime objects separated by 15 minute intervals. You can then iterate over the rrule object and extract the time from each datetime object to create a list of quarter-hour timestamps.\nHere's an example:\nfrom datetime import datetime\nfrom dateutil.rrule import rrule, MINUTELY\n\n# Timestamps for start and end\ntimestamp1 = 1605653932\ntimestamp2 = 1605656932\n\n# Create a datetime object for the start timestamp\nstart = datetime.fromtimestamp(timestamp1)\n\n# Create a datetime object for the end timestamp\nend = datetime.fromtimestamp(timestamp2)\n\n# Create a list of quarter-hour timestamps between the start and end timestamps\ntimestamps = []\nfor dt in rrule(freq=MINUTELY, interval=15, dtstart=start, until=end):\n timestamps.append(dt.strftime('%H:%M'))\n\n# Print the list of quarter-hour timestamps\nprint(timestamps)\n\nThis code will generate a list of quarter-hour timestamps in the format 'HH:MM', where HH is the hour and MM is the minute, for all quarter hours between the start and end timestamps. For the timestamps 1605653932 and 1605656932, the output would be:\n['23:58', '00:13', '00:28', '00:43']\n\nIf you want to start with the next quarter-hour after the start timestamp, you can change the code as follows:\nfrom datetime import datetime, timedelta\nfrom dateutil.rrule import rrule, MINUTELY\n\n# Timestamps for start and end\ntimestamp1 = 1605653932\ntimestamp2 = 1605656932\n\n# Create a datetime object for the start timestamp\nstart = datetime.fromtimestamp(timestamp1)\n\n# Create a datetime object for the end timestamp\nend = datetime.fromtimestamp(timestamp2)\n\n# Find the next quarter-hour after the start timestamp\nstart_minutes = start.minute\nstart_next_quarter_hour = start + timedelta(minutes=15 - start_minutes % 15)\n\n# Create a list of quarter-hour timestamps between the start and end timestamps\ntimestamps = []\nfor dt in rrule(freq=MINUTELY, interval=15, dtstart=start_next_quarter_hour, until=end):\n timestamps.append(dt.strftime('%H:%M'))\n\n# Print the list of quarter-hour timestamps\nprint(timestamps)\n\nThis will give the following output:\n['00:00', '00:15', '00:30', '00:45']\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"timestamp"
] |
stackoverflow_0074661759_python_timestamp.txt
|
Q:
Syntax error or access violation: 1059 Identifier name is too long
I receive a MySQL error when I create a table:
SQLSTATE[42000]: Syntax error or access violation: 1059 Identifier name 'FK_SALES_FLAT_CREDITMEMO_GRID_ARCHIVE_STORE_ID_CORE_STORE_STORE_ID' is too long
How can the default Identifier name size be increased or how can I solve this otherwise?
A:
Please take a look at http://dev.mysql.com/doc/refman/5.5/en/identifiers.html - you are limited to 64 chars to an identifier.
A:
Provide your own short name for the key.
$table->unique(['product_id', 'company_id', 'price', 'delivery_hours'], 'prices_history_index_unique');
A:
Just give the primary key a shorter name.
Like this:
$table->primary(['company_store_id', 'company_product_id'], 'product_store_id');
|
Syntax error or access violation: 1059 Identifier name is too long
|
I receive a MySQL error when I create a table:
SQLSTATE[42000]: Syntax error or access violation: 1059 Identifier name 'FK_SALES_FLAT_CREDITMEMO_GRID_ARCHIVE_STORE_ID_CORE_STORE_STORE_ID' is too long
How can the default Identifier name size be increased or how can I solve this otherwise?
|
[
"Please take a look at http://dev.mysql.com/doc/refman/5.5/en/identifiers.html - you are limited to 64 chars to an identifier.\n",
"Provide your own shot name to key.\n$table->unique(['product_id', 'company_id', 'price', 'delivery_hours'], 'prices_history_index_unique');\n\n",
"Just give the primary key a shorter name.\nLike this:\n$table->primary(['company_store_id', 'company_product_id'], 'product_store_id');\n\n"
] |
[
41,
16,
0
] |
[] |
[] |
[
"mysql"
] |
stackoverflow_0013133517_mysql.txt
|
Q:
How can I emulate gamma correction with CSS3 filters?
According to this page http://www.w3schools.com/cssref/css3_pr_filter.asp
there are contrast, brightness, hue, saturation, etc. But no explicit access to gamma. Is there a way to emulate it with the existing CSS3 image filters, or does a plugin exist (jQuery or other JS) which makes it possible?
A:
Gamma is more closely related to contrast than anything else. While there isn't explicitly a filter for it, you could get near-identical results by using small adjustments to brightness and working with contrast primarily.
For example if I wanted to raise the gamma on an image that looks too dark I might try:
filter: contrast(125%) brightness(105%);
Keep in mind that brightness is used primarily to lift the dark areas of the image; the contrast should be doing most of the work if you want to closely emulate gamma.
Feel free to check out a topic asking about gamma vs brightness here:
https://graphicdesign.stackexchange.com/questions/11445/gamma-vs-brightness-any-difference
Hope that helped. Cheers.
A:
You can use svg filters
#filtered {
filter: url('#gamma');
}
<img src="https://picsum.photos/seed/picsum/300/200">
<img id="filtered" src="https://picsum.photos/seed/picsum/300/200">
<svg height="0">
<filter id="gamma">
<feComponentTransfer>
<feFuncR type="gamma" exponent="1.5" amplitude="2.5" offset="0" />
<feFuncG type="gamma" exponent="1.5" amplitude="2.5" offset="0" />
<feFuncB type="gamma" exponent="1.5" amplitude="2.5" offset="0" />
</feComponentTransfer>
</filter>
</svg>
|
How can I emulate gamma correction with CSS3 filters?
|
According to this page http://www.w3schools.com/cssref/css3_pr_filter.asp
there are contrast, brighness, hue, saturation, etc. But no explicit access to gamma. Is there a way to emulate it with the existing CSS3 image filters, or does exist a plugin (JQuery or other JS) which makes it possible?
|
[
"Gamma is more closely related to contrast than anything. While there isn't explicitly a filter for it, you could get near identical results by using small adjustments to brightness and working with contrast primarily.\nFor example if I wanted to raise the gamma on an image that looks too dark I might try:\nfilter: contrast(125%) brightness(105%);\n\nkeeping in mind to use the brightness primarily to brighten up the darkness in the image, the contrast should be doing most of the work in the case that you want to closely emulate gamma.\nFeel free to check out a topic asking about gamma vs brightness here:\nhttps://graphicdesign.stackexchange.com/questions/11445/gamma-vs-brightness-any-difference\nHope that helped. Cheers.\n",
"You can use svg filters\n\n\n#filtered {\n filter: url('#gamma');\n}\n<img src=\"https://picsum.photos/seed/picsum/300/200\">\n<img id=\"filtered\" src=\"https://picsum.photos/seed/picsum/300/200\">\n\n<svg height=\"0\">\n <filter id=\"gamma\">\n <feComponentTransfer>\n <feFuncR type=\"gamma\" exponent=\"1.5\" amplitude=\"2.5\" offset=\"0\" />\n <feFuncG type=\"gamma\" exponent=\"1.5\" amplitude=\"2.5\" offset=\"0\" />\n <feFuncB type=\"gamma\" exponent=\"1.5\" amplitude=\"2.5\" offset=\"0\" />\n </feComponentTransfer>\n </filter>\n</svg>\n\n\n\n"
] |
[
3,
0
] |
[] |
[] |
[
"brightness",
"css",
"gamma",
"image",
"javascript"
] |
stackoverflow_0035142958_brightness_css_gamma_image_javascript.txt
|
Q:
Visual Studio 2022: What is the Feature recommending style changes?
I recently updated Visual Studio 2022 to 17.4 and noticed the editor started adding red box highlighting around multiple lines, trying to encourage me to change them to a single very long line.
What is this feature called and how do I configure it?
NOTE: I'm pretty sure this is coming from Visual Studio - I also run ReSharper, but this doesn't look like a ReSharper plugin; there's no signature yellow light bulb.
A:
Looks like Intellicode. In Tools:Options:Intellicode:General, try unchecking "C# suggestions". Notice that it is suggesting some real changes, not just a formatting change.
|
Visual Studio 2022: What is the Feature recommending style changes?
|
I recently updated to Visual Studio 2022 to 17.4 and noticed the editor starting adding red box highlighting around multiple lines trying to encourage me to change it to a single very long line.
What is this feature called and how do I configure it?
NOTE: I'm pretty sure this is coming from Visual Studio - I also run ReSharper, but this doesn't look a ReSharper plugin; there's no signature yellow light bulb.
|
[
"Looks like Intellicode. In Tools:Options:Intellicode:General, try unchecking \"C# suggestions\". Notice that it is suggesting some real changes, not just a formatting change.\n"
] |
[
0
] |
[] |
[] |
[
"visual_studio",
"visual_studio_2022"
] |
stackoverflow_0074649784_visual_studio_visual_studio_2022.txt
|
Q:
how to develop touchscreen apps for a big touchscreen using react?
Hi, I'm a React developer and was wondering what the best modern way is to create a simple app for modern touchscreen devices: a simple app with one page and some links which lead to assets that can be viewed.
A:
I can't say what is best, because it would just be an opinion, but if you are already a React dev, React Native would allow you to jump in fairly quickly with mobile app development. If you are just looking to make a web app, you will be able to accomplish everything you would like to do in standard React.
A:
If what you want is a mobile application, you can start using React Native. One way to quickly get into development with React Native is to use Expo https://expo.dev/
You could also create a Progressive Web App, which is basically like a web app, but installable.
|
how to develop touchscreen apps for a big touchscreen using react?
|
hi im a react developer and was wondering what is the best modern way to create a simple app for modern touchscreen devices. A simple app with one page, some links which lead to assets which can be viewed.
|
[
"I can't say what is best, because it would just be an opinion, but if you are already a react dev, React Native would allow you to jump in fairly quickly with mobile app development. If you are just looking to make a web app, You will be able to accomplish everything you would like to do in standard React.\n",
"If what you want is a mobile application, you can start using React Native. One way to quickly get into development with React Native is to use Expo https://expo.dev/\nYou could also create a Progressive Web App, which is basically like a web app, but installable.\n"
] |
[
0,
0
] |
[] |
[] |
[
"react_native",
"reactjs",
"touchscreen"
] |
stackoverflow_0074661576_react_native_reactjs_touchscreen.txt
|
Q:
createsuperuser is not working with custom user model
I know this question is a repeat (I've looked at similar posts here and here), although I still couldn't get anywhere with the solutions. I've created a custom user model for my application, but when I create a superuser I can't seem to sign in on the admin panel. What have I done wrong?
user model
from django.db import models
from django.contrib.auth.models import AbstractBaseUser, BaseUserManager
class UserAccountManager(BaseUserManager):
def create_user(self, email, username, password=None):
if not email:
raise ValueError('Users must provide an email to create an account')
if not username:
raise ValueError('Users must provide a username to create an account')
user = self.model(
email=self.normalize_email(email),
username=username
)
user.set_password(password)
user.save(using=self._db)
return user
def create_superuser(self, email, username, password):
user = self.create_user(
email=self.normalize_email(email),
username=username,
password=password
)
user.is_admin = True
user.is_staff = True
user.is_superuser = True
user.save(using=self._db)
return user
class UserAccount(AbstractBaseUser):
email = models.EmailField(verbose_name='email', max_length=60, unique=True)
username = models.CharField(max_length=30, unique=True)
date_joined = models.DateTimeField(verbose_name='date joined', auto_now_add=True)
last_login = models.DateTimeField(verbose_name='last_login', auto_now_add=True)
is_admin = models.BooleanField(default=True)
is_active = models.BooleanField(default=False)
is_superuser = models.BooleanField(default=False)
USERNAME_FIELD = 'email'
REQUIRED_FIELDS = ['username']
objects = UserAccountManager()
def __str__(self):
return f'email: {self.email}\nusername: {self.username}'
def has_perm(self, perm, obj=None):
return self.is_admin
def has_module_perm(self, app_label):
return True
my settings
from pathlib import Path
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'django-insecure-l9=z0v(f%w$s9wx2)8$bgz&kfd##ap6rk&ug%hu^q3ju*04q%+'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# set custom user model
AUTH_USER_MODEL = 'account.UserAccount'
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
# additions
'rest_framework',
'account'
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'type_tag.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'type_tag.wsgi.application'
# Database
# https://docs.djangoproject.com/en/4.1/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'db.sqlite3',
}
}
# Password validation
# https://docs.djangoproject.com/en/4.1/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/4.1/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/4.1/howto/static-files/
STATIC_URL = 'static/'
# Default primary key field type
# https://docs.djangoproject.com/en/4.1/ref/settings/#default-auto-field
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
A:
Your create_superuser does not set user.is_active = True, is_staff = models.BooleanField(default=False) is also missing from your model, and you need to have PermissionsMixin if you are using AbstractBaseUser
Below is your working model and UserAccountManager, check it:
from django.db import models
from django.contrib.auth.models import AbstractBaseUser, BaseUserManager, PermissionsMixin
class UserAccountManager(BaseUserManager):
def create_user(self, email, username, password=None):
if not email:
raise ValueError('Users must provide an email to create an account')
if not username:
raise ValueError('Users must provide a username to create an account')
user = self.model(
email=self.normalize_email(email),
username=username
)
user.set_password(password)
user.save(using=self._db)
return user
def create_superuser(self, email, username, password):
user = self.create_user(
email=self.normalize_email(email),
username=username,
password=password
)
user.is_admin = True
user.is_staff = True
user.is_superuser = True
user.is_active = True
user.save(using=self._db)
return user
class UserAccount(AbstractBaseUser, PermissionsMixin):
email = models.EmailField(verbose_name='email', max_length=60, unique=True)
username = models.CharField(max_length=30, unique=True)
date_joined = models.DateTimeField(verbose_name='date joined', auto_now_add=True)
last_login = models.DateTimeField(verbose_name='last_login', auto_now_add=True)
is_admin = models.BooleanField(default=True)
is_active = models.BooleanField(default=False)
is_staff = models.BooleanField(default=False)
is_superuser = models.BooleanField(default=False)
USERNAME_FIELD = 'email'
REQUIRED_FIELDS = ['username']
objects = UserAccountManager()
def __str__(self):
return f'email: {self.email}\nusername: {self.username}'
def has_perm(self, perm, obj=None):
return self.is_admin
def has_module_perm(self, app_label):
return True
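A follow-up note (not part of the original answer): after changing the model like this, you will most likely also need to run python manage.py makemigrations and python manage.py migrate, since is_staff is a new database field, and then recreate the superuser with python manage.py createsuperuser so it is saved with is_active = True.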
|
createsuperuser is not working with custom user model
|
I know this question is a repeat (I've looked at similar posts here and here), although I still couldn't get anywhere with the solutions. I've created a custom user model for my application, but when I create a superuser I can't seem to sign in on the admin panel. What have I done wrong?
user model
from django.db import models
from django.contrib.auth.models import AbstractBaseUser, BaseUserManager
class UserAccountManager(BaseUserManager):
def create_user(self, email, username, password=None):
if not email:
raise ValueError('Users must provide an email to create an account')
if not username:
raise ValueError('Users must provide a username to create an account')
user = self.model(
email=self.normalize_email(email),
username=username
)
user.set_password(password)
user.save(using=self._db)
return user
def create_superuser(self, email, username, password):
user = self.create_user(
email=self.normalize_email(email),
username=username,
password=password
)
user.is_admin = True
user.is_staff = True
user.is_superuser = True
user.save(using=self._db)
return user
class UserAccount(AbstractBaseUser):
email = models.EmailField(verbose_name='email', max_length=60, unique=True)
username = models.CharField(max_length=30, unique=True)
date_joined = models.DateTimeField(verbose_name='date joined', auto_now_add=True)
last_login = models.DateTimeField(verbose_name='last_login', auto_now_add=True)
is_admin = models.BooleanField(default=True)
is_active = models.BooleanField(default=False)
is_superuser = models.BooleanField(default=False)
USERNAME_FIELD = 'email'
REQUIRED_FIELDS = ['username']
objects = UserAccountManager()
def __str__(self):
return f'email: {self.email}\nusername: {self.username}'
def has_perm(self, perm, obj=None):
return self.is_admin
def has_module_perm(self, app_label):
return True
my settings
from pathlib import Path
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'django-insecure-l9=z0v(f%w$s9wx2)8$bgz&kfd##ap6rk&ug%hu^q3ju*04q%+'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# set custom user model
AUTH_USER_MODEL = 'account.UserAccount'
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
# additions
'rest_framework',
'account'
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'type_tag.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'type_tag.wsgi.application'
# Database
# https://docs.djangoproject.com/en/4.1/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'db.sqlite3',
}
}
# Password validation
# https://docs.djangoproject.com/en/4.1/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/4.1/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/4.1/howto/static-files/
STATIC_URL = 'static/'
# Default primary key field type
# https://docs.djangoproject.com/en/4.1/ref/settings/#default-auto-field
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
|
[
"Your create_superuser does not have user.is_active = True and you is_staff = models.BooleanField(default=False) is also missing from your model, and you need to have PermissionsMixin if you are using AbstractBaseUser\nBelow is your working model and UserAccountManager, check it:\nfrom django.db import models\nfrom django.contrib.auth.models import AbstractBaseUser, BaseUserManager, PermissionsMixin\n\nclass UserAccountManager(BaseUserManager):\n\n def create_user(self, email, username, password=None):\n if not email:\n raise ValueError('Users must provide an email to create an account')\n if not username:\n raise ValueError('Users must provide a username to create an account')\n\n user = self.model(\n email=self.normalize_email(email),\n username=username\n )\n\n user.set_password(password)\n user.save(using=self._db)\n return user\n\n def create_superuser(self, email, username, password):\n\n user = self.create_user(\n email=self.normalize_email(email),\n username=username,\n password=password\n )\n user.is_admin = True\n user.is_staff = True\n user.is_superuser = True\n user.is_active = True\n user.save(using=self._db)\n return user\n\n\nclass UserAccount(AbstractBaseUser, PermissionsMixin):\n email = models.EmailField(verbose_name='email', max_length=60, unique=True)\n username = models.CharField(max_length=30, unique=True)\n date_joined = models.DateTimeField(verbose_name='date joined', auto_now_add=True)\n last_login = models.DateTimeField(verbose_name='last_login', auto_now_add=True)\n is_admin = models.BooleanField(default=True)\n is_active = models.BooleanField(default=False)\n is_staff = models.BooleanField(default=False)\n is_superuser = models.BooleanField(default=False)\n\n USERNAME_FIELD = 'email'\n REQUIRED_FIELDS = ['username']\n\n objects = UserAccountManager()\n\n def __str__(self):\n return f'email: {self.email}\\nusername: {self.username}'\n\n def has_perm(self, perm, obj=None):\n return self.is_admin\n\n def has_module_perm(self, app_label):\n return True\n\n"
] |
[
1
] |
[] |
[] |
[
"django",
"django_models",
"python_3.x"
] |
stackoverflow_0074659768_django_django_models_python_3.x.txt
|
Q:
Puppeteer has issues going through a website and gets an error trying to
Issue
I have been trying to find the root issue but haven't been able to replicate it;
trying to reload several times on Nansen would get you to this page.
I couldn't find a similar situation described elsewhere.
I am running the application on Node.js with the Puppeteer module.
I tried refreshing the browser, which somewhat works, but sometimes requests in the website stop due to that "Cloudflare" error.
A:
According to your screenshot, your browser was blocked by the web application firewall - WAF.
At the bottom of the page you can find "Cloudflare Ray ID", it looks like:
Cloudflare Ray ID: 616892cddd3ab3fa
but you will have your own value here.
You can find the reason of block by using "firewall events" and using the Cloudflare Ray ID copied from previous step.
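The firewall events log is typically found in the Cloudflare dashboard under Security → Events; searching it for the copied Ray ID should show which WAF rule triggered the block.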
|
Puppeteer has issues going through a website and gets an error trying to
|
Issue
I have been trying to find the root issue but haven't been able to replicate it;
trying to reload several times on Nansen would get you to this page.
I couldn't find a similar situation described elsewhere.
I am running the application on Node.js with the Puppeteer module.
I tried refreshing the browser, which somewhat works, but sometimes requests in the website stop due to that "Cloudflare" error.
|
[
"According to your screenshot, your browser was blocked by the web application firewall - WAF.\nAt the bottom of the page you can find \"Cloudflare Ray ID\", it looks like:\n\nCloudflare Ray ID: 616892cddd3ab3fa\nbut you will have your own value here.\n\nYou can find the reason of block by using \"firewall events\" and using the Cloudflare Ray ID copied from previous step.\n"
] |
[
0
] |
[] |
[] |
[
"cloudflare",
"node.js",
"puppeteer",
"web_scraping"
] |
stackoverflow_0074659993_cloudflare_node.js_puppeteer_web_scraping.txt
|
Q:
how to convert text to word embeddings using bert's pretrained model 'faster'?
I'm trying to get word embeddings for clinical data using microsoft/pubmedbert. I have 3.6 million text rows. Converting texts to vectors for 10k rows takes around 30 minutes. So for 3.6 million rows, it would take around 180 hours (approx. 8 days).
Is there any method where I can speed up the process?
My code -
from transformers import AutoTokenizer
from transformers import pipeline
model_name = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"
tokenizer = AutoTokenizer.from_pretrained(model_name)
classifier = pipeline('feature-extraction',model=model_name, tokenizer=tokenizer)
def lambda_func(row):
tokens = tokenizer(row['notetext'])
if len(tokens['input_ids'])>512:
tokens = re.split(r'\b', row['notetext'])
tokens= [t for t in tokens if len(t) > 0 ]
row['notetext'] = ''.join(tokens[:512])
row['vectors'] = classifier(row['notetext'])[0][0]
return row
def process(progress_notes):
progress_notes = progress_notes.apply(lambda_func, axis=1)
return progress_notes
progress_notes = process(progress_notes)
vectors_breadth = 768
vectors_length = len(progress_notes)
vectors_2d = np.reshape(progress_notes['vectors'].to_list(), (vectors_length, vectors_breadth))
vectors_df = pd.DataFrame(vectors_2d)
My progress_notes dataframe looks like -
progress_notes = pd.DataFrame({'id':[1,2,3],'progressnotetype':['Nursing Note', 'Nursing Note', 'Administration Note'], 'notetext': ['Patient\'s skin is grossly intact with exception of skin tear to r inner elbow and r lateral lower leg','Patient with history of Afib with RVR. Patient is incontinent of bowel and bladder.','Give 2 tablet by mouth every 4 hours as needed for Mild to moderate Pain Not to exceed 3 grams in 24 hours']})
Note - 1) I'm running the code on an AWS EC2 instance r5.8xlarge (32 CPUs). I tried using multiprocessing but the code goes into a deadlock because BERT takes all my CPU cores.
A:
Why not just encode everything in one call? apply is not the fastest method.
from sentence_transformers import SentenceTransformer
sbert_model = SentenceTransformer('microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext')
document_embeddings = sbert_model.encode(pd.Series(['hello', 'cell type', 'protein']))
document_embeddings
you will get an output like
array([[ 0.06255245, 0.14945783, -0.06224129, ..., -0.11892398,
-0.0507343 , 0.0153866 ],
[-0.17571464, 0.03554079, -0.04899959, ..., -0.24369009,
-0.00672011, 0.04914075],
[-0.22093703, -0.03271236, -0.08943298, ..., -0.21335356,
0.11418738, -0.09207606]], dtype=float32)
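As a further speed-up, here is a minimal sketch (an assumption on my part, not part of the original answer: it presumes sentence-transformers is installed and that progress_notes from the question fits in memory) of encoding the whole column in batches instead of row by row:
import pandas as pd
from sentence_transformers import SentenceTransformer

# Batched encoding: encode() accepts a list of strings plus a batch_size,
# so the model processes many rows per forward pass instead of one at a time.
model = SentenceTransformer('microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext')
texts = progress_notes['notetext'].tolist()  # progress_notes is the dataframe from the question
vectors = model.encode(texts, batch_size=64, show_progress_bar=True)  # tune batch_size to your memory budget
vectors_df = pd.DataFrame(vectors)  # shape: (num_rows, 768)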
|
how to convert text to word embeddings using bert's pretrained model 'faster'?
|
I'm trying to get word embeddings for clinical data using microsoft/pubmedbert. I have 3.6 million text rows. Converting texts to vectors for 10k rows takes around 30 minutes. So for 3.6 million rows, it would take around 180 hours (approx. 8 days).
Is there any method where I can speed up the process?
My code -
from transformers import AutoTokenizer
from transformers import pipeline
model_name = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"
tokenizer = AutoTokenizer.from_pretrained(model_name)
classifier = pipeline('feature-extraction',model=model_name, tokenizer=tokenizer)
def lambda_func(row):
tokens = tokenizer(row['notetext'])
if len(tokens['input_ids'])>512:
tokens = re.split(r'\b', row['notetext'])
tokens= [t for t in tokens if len(t) > 0 ]
row['notetext'] = ''.join(tokens[:512])
row['vectors'] = classifier(row['notetext'])[0][0]
return row
def process(progress_notes):
progress_notes = progress_notes.apply(lambda_func, axis=1)
return progress_notes
progress_notes = process(progress_notes)
vectors_breadth = 768
vectors_length = len(progress_notes)
vectors_2d = np.reshape(progress_notes['vectors'].to_list(), (vectors_length, vectors_breadth))
vectors_df = pd.DataFrame(vectors_2d)
My progress_notes dataframe looks like -
progress_notes = pd.DataFrame({'id':[1,2,3],'progressnotetype':['Nursing Note', 'Nursing Note', 'Administration Note'], 'notetext': ['Patient\'s skin is grossly intact with exception of skin tear to r inner elbow and r lateral lower leg','Patient with history of Afib with RVR. Patient is incontinent of bowel and bladder.','Give 2 tablet by mouth every 4 hours as needed for Mild to moderate Pain Not to exceed 3 grams in 24 hours']})
Note - 1) I'm running the code on an AWS EC2 instance r5.8xlarge (32 CPUs). I tried using multiprocessing but the code goes into a deadlock because BERT takes all my CPU cores.
|
[
"isn't just ? apply is not the fastest method.\nfrom sentence_transformers import SentenceTransformer\nsbert_model = SentenceTransformer('microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext')\ndocument_embeddings = sbert_model.encode(pd.Series(['hello', 'cell type', 'protein']))\ndocument_embeddings\n\nyou will get an output like\narray([[ 0.06255245, 0.14945783, -0.06224129, ..., -0.11892398,\n -0.0507343 , 0.0153866 ],\n [-0.17571464, 0.03554079, -0.04899959, ..., -0.24369009,\n -0.00672011, 0.04914075],\n [-0.22093703, -0.03271236, -0.08943298, ..., -0.21335356,\n 0.11418738, -0.09207606]], dtype=float32)\n\n"
] |
[
0
] |
[] |
[] |
[
"bert_language_model",
"nlp",
"python_3.x",
"word_embedding"
] |
stackoverflow_0065494850_bert_language_model_nlp_python_3.x_word_embedding.txt
|
Q:
Detailed Balance equations in R
I have a matrix and a vector in R.
The matrix is :
P = matrix(c(0,2/5,3/5,
1/2,1/4,1/4,
1/2,1/6,1/3),3,3,byrow = TRUE)
P
and the vector is :
p = c(1/3,4/15,2/5)
I want to create a function that will have two arguments:
i) the matrix and
ii) the vector
Inside the function I want to check all the detailed balance equations p_iP_ij = p_jP_ji; if all of these hold, return the message "balanced", otherwise "not_balanced".
For example :
p_1P_12 = 1/3*2/5 = 2/15
p_2P_21 = 4/15*1/2 =2/15
So p_1P_12 = p_2P_21
p_1P_13 = 1/3*3/5=3/15=1/5
p_3P_31 = 2/5*1/2=2/10=1/5
p_1P_13 = p_3P_31
p_2P_23 = 4/15*1/4=4/60=1/15
p_3P_32 = 2/5*1/6=2/30=1/15
p_2P_23=p_3P_32
Note: this is an example from Robert Dobrow's book Introduction to Stochastic Processes with R, and is about a 3x3 matrix and a vector of length 3.
In general it might be an A \in R^{m \times m} matrix and a vector of length m.
How can I do it in R ?
A:
You could write a function to accomplish this:
is_balanced <- function(Mat, vec){
Matrix::isSymmetric(Mat * vec)
}
is_balanced(P,p)
[1] TRUE
using for-loop:
is_balanced2 <- function(Mat, vec){
l <- length(vec)
for(i in seq(1, l-1)){
for(j in seq(i+1, l))
res <- abs(vec[i] * Mat[i, j]- vec[j] * Mat[j,i])
if( res > .Machine$double.eps)return(FALSE)
}
return(TRUE)
}
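For the example above, is_balanced2(P, p) should likewise return TRUE (up to floating-point tolerance), and FALSE as soon as any pair violates p_i P_ij = p_j P_ji.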
|
Detailed Balance equations in R
|
I have a matrix and a vector in R.
The matrix is :
P = matrix(c(0,2/5,3/5,
1/2,1/4,1/4,
1/2,1/6,1/3),3,3,byrow = TRUE)
P
and the vector is :
p = c(1/3,4/15,2/5)
I want to create a function that will have two arguments:
i) the matrix and
ii) the vector
Inside the function I want to check all the detailed balance equations p_iP_ij = p_jP_ji; if all of these hold, return the message "balanced", otherwise "not_balanced".
For example :
p_1P_12 = 1/3*2/5 = 2/15
p_2P_21 = 4/15*1/2 =2/15
So p_1P_12 = p_2P_21
p_1P_13 = 1/3*3/5=3/15=1/5
p_3P_31 = 2/5*1/2=2/10=1/5
p_1P_13 = p_3P_31
p_2P_23 = 4/15*1/4=4/60=1/15
p_3P_32 = 2/5*1/6=2/30=1/15
p_2P_23=p_3P_32
Note: this is an example from Robert Dobrow's book Introduction to Stochastic Processes with R, and is about a 3x3 matrix and a vector of length 3.
In general it might be an A \in R^{m \times m} matrix and a vector of length m.
How can I do it in R ?
|
[
"You could write a function to accomplish this:\nis_balanced <- function(Mat, vec){\n Matrix::isSymmetric(Mat * vec)\n}\n\nis_balanced(P,p)\n[1] TRUE\n\n\nusing for-loop:\nis_balanced2 <- function(Mat, vec){\n l <- length(vec)\n for(i in seq(1, l-1)){\n for(j in seq(i+1, l))\n res <- abs(vec[i] * Mat[i, j]- vec[j] * Mat[j,i])\n if( res > .Machine$double.eps)return(FALSE)\n }\n return(TRUE)\n}\n\n"
] |
[
2
] |
[] |
[] |
[
"function",
"matrix",
"r"
] |
stackoverflow_0074661589_function_matrix_r.txt
|
Q:
Reading the last line of an empty file on python
I have this function in my code that is supposed to read a file's last line and, if there is no file, create one. My issue is that when it creates the file and tries to read the last line, it raises an error.
with open(HIGH_SCORES_FILE_PATH, "w+") as file:
last_line = file.readlines()[-1]
if last_line == '\n':
with open(HIGH_SCORES_FILE_PATH, 'a') as file:
file.write('Jogo:')
file.write('\n')
file.write(str(0))
file.write('\n')
I have tried multiple ways of reading the last line, but all of the ones I've tried end in an error.
A:
Opening a file in "w+" erases any content in the file. readlines() then returns an empty list, and trying to index it results in an IndexError. You can test for a file's existence with os.path.exists or os.path.isfile, or you could use an exception handler to deal with that case.
Start with last_line set to a sentinel value. If the open fails, or if no lines are read, last_line will not be updated and you can base file creation on that.
last_line = None
try:
with open(HIGH_SCORES_FILE_PATH) as file:
for last_line in file:
pass
except OSError:
pass
if last_line is None:
with open(HIGH_SCORES_FILE_PATH, "w") as file:
file.write('Jogo:\n0\n')
last_line = '0\n'
A:
To read the last line of a file, you can use the seek method to move the file cursor to the end of the file, and then use the readline method to try to read the last line.
with open(HIGH_SCORES_FILE_PATH, "w+") as file:
file.seek(0, 2) # Move cursor to the end of the file
last_line = file.readline()
if last_line == '\n':
with open(HIGH_SCORES_FILE_PATH, 'a') as file:
file.write('Jogo:')
file.write('\n')
file.write(str(0))
file.write('\n')
Note that if the file is empty, readline will return an empty string, so you should check for that case as well.
with open(HIGH_SCORES_FILE_PATH, "w+") as file:
file.seek(0, 2) # Move cursor to the end of the file
last_line = file.readline()
if last_line == '' or last_line == '\n':
with open(HIGH_SCORES_FILE_PATH, 'a') as file:
file.write('Jogo:')
file.write('\n')
file.write(str(0))
file.write('\n')
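Note that because "w+" truncates the file on open, the readline() after seeking will always return an empty string for a freshly created file, so in practice it is the empty-string branch of the second snippet that runs.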
|
Reading the last line of an empty file on python
|
I have this function in my code that is supposed to read a file's last line and, if there is no file, create one. My issue is that when it creates the file and tries to read the last line, it raises an error.
with open(HIGH_SCORES_FILE_PATH, "w+") as file:
last_line = file.readlines()[-1]
if last_line == '\n':
with open(HIGH_SCORES_FILE_PATH, 'a') as file:
file.write('Jogo:')
file.write('\n')
file.write(str(0))
file.write('\n')
I have tried multiple ways of reading the last line, but all of the ones I've tried end in an error.
|
[
"Opening a file in \"w+\" erases any content in the file. readlines() returns an empty list and trying to get value results in an IndexError. You can test for a file's existence with os.path.exists or os.path.isfile, or you could use an exception handler to deal with that case.\nStart with last_line set to a sentinel value. If the open fails, or if no lines are read, last_line will not be updated and you can base file creation on that.\nlast_line = None\ntry:\n with open(HIGH_SCORES_FILE_PATH) as file:\n for last_line in file:\n pass \nexcept OSError:\n pass\n\nif last_line is None:\n with open(HIGH_SCORES_FILE_PATH, \"w\") as file:\n file.write('Jogo:\\n0\\n')\n last_line = '0\\n'\n\n",
"To read the last line of a file, you can use the seek method and set the position to the beginning of the file, then move the file cursor to the end of the file. Then, you can use the readline method to read the last line.\nwith open(HIGH_SCORES_FILE_PATH, \"w+\") as file:\n file.seek(0, 2) # Move cursor to the end of the file\n last_line = file.readline()\n if last_line == '\\n':\n with open(HIGH_SCORES_FILE_PATH, 'a') as file:\n file.write('Jogo:')\n file.write('\\n')\n file.write(str(0))\n file.write('\\n')\n\nNote that if the file is empty, readline will return an empty string, so you should check for that case as well.\nwith open(HIGH_SCORES_FILE_PATH, \"w+\") as file:\n file.seek(0, 2) # Move cursor to the end of the file\n last_line = file.readline()\n if last_line == '' or last_line == '\\n':\n with open(HIGH_SCORES_FILE_PATH, 'a') as file:\n file.write('Jogo:')\n file.write('\\n')\n file.write(str(0))\n file.write('\\n')\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"file",
"python"
] |
stackoverflow_0074661682_file_python.txt
|
Q:
How to clean up the docker system, without removing certain images?
To clean up the docker system there's the useful command
docker system prune -a
with its various options. I find myself often executing that to clean up the mess I've made when trying out different things in development, only to later re-download a bunch of images. This normally takes a long time (5-10 minutes) in my use cases, and unnecessarily hogs the bandwidth of the image libraries I use.
What I'd like is to know if there is a simple way to prevent certain images (ones that I know I will need again in the near future) from being pruned while still retaining the convenience of pruning the docker system in a single command?
One way I could do this is start a container from the images I want to keep before pruning, and then prune, but that defeats the convenience of a one-liner above. In addition, those containers need stopping and removing after pruning as they are an unnecessary burden on my resources at that point.
Is there a simple(r) way to do this?
A:
You can build your own images and LABEL them, like in this Dockerfile:
FROM ubuntu:20.04
LABEL myimage=keepit
You build this Dockerfile with docker build -t myubuntu .
And then you can use a filter on this label so it is not pruned: docker system prune -a --filter "label!=myimage=keepit"
Docs: https://docs.docker.com/engine/reference/commandline/system_prune/#filtering
|
How to clean up the docker system, without removing certain images?
|
To clean up the docker system there's the useful command
docker system prune -a
with its various options. I find myself often executing that to clean up the mess I've made when trying out different things in development, only to later re-download a bunch of images. This normally takes a long time (5-10 minutes) in my use cases, and unnecessarily hogs the bandwidth of the image libraries I use.
What I'd like is to know if there is a simple way to prevent certain images (ones that I know I will need again in the near future) from being pruned while still retaining the convenience of pruning the docker system in a single command?
One way I could do this is start a container from the images I want to keep before pruning, and then prune, but that defeats the convenience of a one-liner above. In addition, those containers need stopping and removing after pruning as they are an unnecessary burden on my resources at that point.
Is there a simple(r) way to do this?
|
[
"You can make your own images and LABEL them like this Dockerfile:\nFROM ubuntu:20.04\nLABEL myimage=keepit\n\nYou build this Dockerfile with docker build -t myubuntu .\nAnd then you can use filter for this label to not prune it: docker prune -a --filter label!=\"myimage=keepit\"\nDocs: https://docs.docker.com/engine/reference/commandline/system_prune/#filtering\n"
] |
[
1
] |
[] |
[] |
[
"docker",
"docker_image"
] |
stackoverflow_0074657258_docker_docker_image.txt
|
Q:
Send Big query table rows to Kafka avro message using apache beam
I need to publish the Big query table rows to Kafka in Avro format.
PCollection<TableRow> rows =
pipeline
.apply(
"Read from BigQuery query",
BigQueryIO.readTableRows().from(String.format("%s:%s.%s", project, dataset, table))
//How to convert rows to avro format?
rows.apply(KafkaIO.<Long, ???>write()
.withBootstrapServers("kafka:29092")
.withTopic("test")
.withValueSerializer(KafkaAvorSerializer.class)
);
How to convert TableRow to Avro format?
A:
Use MapElements
rows.apply(MapElements.via(new SimpleFunction<TableRow, GenericRecord>() {
    @Override
    public GenericRecord apply(TableRow input) {
        log.info("Parsing {} to Avro", input);
        return null; // TODO: build and return a GenericRecord from the TableRow
    }
}));
If the input is a collection type that you want to convert to many records, you can use FlatMapElements instead.
As for writing to Kafka, I wrote a simple example
|
Send Big query table rows to Kafka avro message using apache beam
|
I need to publish the Big query table rows to Kafka in Avro format.
PCollection<TableRow> rows =
pipeline
.apply(
"Read from BigQuery query",
BigQueryIO.readTableRows().from(String.format("%s:%s.%s", project, dataset, table))
//How to convert rows to avro format?
rows.apply(KafkaIO.<Long, ???>write()
.withBootstrapServers("kafka:29092")
.withTopic("test")
.withValueSerializer(KafkaAvorSerializer.class)
);
How to convert TableRow to Avro format?
|
[
"Use MapElements\nrows.apply(MapElements.via(new SimpleFunction<Tabelrows, GenericRecord>() {\n @Override\n public GenericRecord apply(Tabelrows input) {\n log.info(\"Parsing {} to Avro\", input);\n return null; // TODO: Replace with Avro object\n }\n});\n\nIf Tabelrows is a collection-type that you want to convert to many records, you can use FlatMapElements instead.\nAs for writing to Kafka, I wrote a simple example\n"
] |
[
0
] |
[] |
[] |
[
"apache_beam",
"apache_beam_kafkaio",
"apache_kafka",
"avro",
"java"
] |
stackoverflow_0074659042_apache_beam_apache_beam_kafkaio_apache_kafka_avro_java.txt
|
Q:
Why is my exit statement not working properly?
The goal is to move between rooms in this simplified version of a text-based game. The code works exactly as planned, except when you input 'exit' directly after inputting 'instructions': the first 'exit' gets handled by the else (invalid) branch, and then a second 'exit' input exits the game as intended. If you make at least one other input after 'instructions', then exit works properly as well.
rooms = {
'Great Hall': {'South': 'Bedroom'},
'Bedroom': {'North': 'Great Hall', 'East': 'Cellar'},
'Cellar': {'West': 'Bedroom'}
}
def instruction():
"""Function to give instructions on how to play the game"""
print('Welcome to Module 6 Milestone')
print('Move commands are go North, go South, go East, go West')
print('Typing exit will exit the game')
print('Inputting instructions will remind you of the game instructions')
print('Good luck may the odds be in your favor')
def invalid():
"""Function for if an invalid input is entered"""
print('------------')
print('Whoops invalid command, try again')
print('------------')
def main():
"""Main function that runs the movement between rooms"""
current_room = 'Great Hall'
print('\nYou are starting in the', current_room)
move = input('What will you do next?\n>').split()
directions = ['North', 'South', 'East', 'West'] # directions in the dictionary
while True:
if len(move) < 2: # for one word inputs
if 'exit' in move: # exit the game
print('\nThanks for playing!')
break
elif 'instructions' in move: # reprint instructions
instruction()
print('------------')
print('\nYou are in the', current_room)
else:
invalid()
print('You are still in the', current_room)
move = input('\nWhat will you do next?\n>').split() # next move input
if len(move) == 2: # 2 word inputs
if move[1] in directions: # checks if move is a valid direction
if move[1] in rooms[current_room]: # if move is a valid direction in current room
current_room = rooms[current_room][move[1]] # changes current room if valid
print('------------')
print('You have found the', current_room)
elif move[1] not in rooms[current_room]: # if move in directions but not a valid move in current room
print('------------')
print('Oh no it seems to be a dead in')
print('You are still in the', current_room)
else: # not a valid move command
invalid()
print('You are still in the', current_room)
move = input('What will you do next?\n>').split() # next move input
else: # invalid move command
invalid()
print('You are still in the', current_room)
move = input('What will you do next?\n>').split()
instruction() # prints instructions function when game runs
print('------------')
if __name__ == '__main__': # if code is not imported than main() will run
main()
A:
After your line
move = input('\nWhat will you do next?\n>').split() # next move input
You should jump back to the beginning of the loop, using continue.
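For illustration, a minimal self-contained sketch of that fix (a reduced loop, not the full game code):
move = input('What will you do next?\n>').split()
while True:
    if len(move) < 2:  # one-word inputs
        if 'exit' in move:
            print('\nThanks for playing!')
            break
        print('one-word command:', move)  # stands in for the instructions/invalid handling
        move = input('\nWhat will you do next?\n>').split()
        continue  # jump back to the top so the new input is re-checked from scratch
    if len(move) == 2:  # two-word inputs, only reached on a fresh iteration now
        print('two-word command:', move)
        move = input('\nWhat will you do next?\n>').split()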
A:
while True:
if len(move) < 2: # for one word inputs
# handle the input in various ways
move = input('\nWhat will you do next?\n>').split() # next move input
if len(move) == 2: # 2 word inputs
When a single-word command is entered, it is handled in the first if block, and then a new move is entered.
The problem is that since the second block is an if and not an elif, it processes the new move immediately.
Generally it's better to ask for user input in only one place, near the top of the loop:
while True:
move = input("...").split()
if len(move) < 2:
# handle it
elif len(move) == 2:
# handle it
else:
# handle it
|
Why is my exit statement not working properly?
|
The goal is to move between rooms in this simplified version of a text-based game. The code works exactly as planned, except when you input 'exit' directly after inputting 'instructions': the first 'exit' gets handled by the else (invalid) branch, and then a second 'exit' input exits the game as intended. If you make at least one other input after 'instructions', then exit works properly as well.
rooms = {
'Great Hall': {'South': 'Bedroom'},
'Bedroom': {'North': 'Great Hall', 'East': 'Cellar'},
'Cellar': {'West': 'Bedroom'}
}
def instruction():
"""Function to give instructions on how to play the game"""
print('Welcome to Module 6 Milestone')
print('Move commands are go North, go South, go East, go West')
print('Typing exit will exit the game')
print('Inputting instructions will remind you of the game instructions')
print('Good luck may the odds be in your favor')
def invalid():
"""Function for if an invalid input is entered"""
print('------------')
print('Whoops invalid command, try again')
print('------------')
def main():
"""Main function that runs the movement between rooms"""
current_room = 'Great Hall'
print('\nYou are starting in the', current_room)
move = input('What will you do next?\n>').split()
directions = ['North', 'South', 'East', 'West'] # directions in the dictionary
while True:
if len(move) < 2: # for one word inputs
if 'exit' in move: # exit the game
print('\nThanks for playing!')
break
elif 'instructions' in move: # reprint instructions
instruction()
print('------------')
print('\nYou are in the', current_room)
else:
invalid()
print('You are still in the', current_room)
move = input('\nWhat will you do next?\n>').split() # next move input
if len(move) == 2: # 2 word inputs
if move[1] in directions: # checks if move is a valid direction
if move[1] in rooms[current_room]: # if move is a valid direction in current room
current_room = rooms[current_room][move[1]] # changes current room if valid
print('------------')
print('You have found the', current_room)
elif move[1] not in rooms[current_room]: # if move in directions but not a valid move in current room
print('------------')
print('Oh no it seems to be a dead in')
print('You are still in the', current_room)
else: # not a valid move command
invalid()
print('You are still in the', current_room)
move = input('What will you do next?\n>').split() # next move input
else: # invalid move command
invalid()
print('You are still in the', current_room)
move = input('What will you do next?\n>').split()
instruction() # prints instructions function when game runs
print('------------')
if __name__ == '__main__': # if code is not imported than main() will run
main()
|
[
"After your line\nmove = input('\\nWhat will you do next?\\n>').split() # next move input\n\nYou should jump back to the beginning of the loop, using continue.\n",
"while True:\n if len(move) < 2: # for one word inputs\n # handle the input in various ways\n move = input('\\nWhat will you do next?\\n>').split() # next move input\n if len(move) == 2: # 2 word inputs\n\nWhen a single-word command is entered, it is handled in the first if block, and then a new move is entered.\nThe problem is that since the second block is an if and not an elif, it processes the new move immediately.\nGenerally it's better to ask for user input in only one place, near the top of the loop:\nwhile True:\n move = input(\"...\").split()\n if len(move) < 2:\n # handle it\n elif len(move) == 2:\n # handle it\n else:\n # handle it\n \n\n"
] |
[
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074661877_python.txt
|
Q:
MUI TablePagination ability to use custom Popper
I would like to be able to use my own custom Popper component that I have built. Some of the MUI components, such as AutoComplete, allow for this by providing the PopperComponent prop; however, TablePagination doesn't have anything beyond adding a class to its popper.
Does anyone know of a way to utilize a custom Popper component with the MUI TablePagination component?
For quick reference, here are the API docs for each of the components mentioned above
TablePagination - https://mui.com/material-ui/api/table-pagination/#props
AutoComplete - https://mui.com/material-ui/api/autocomplete/#props
A:
The TablePagination component from Material-UI doesn't provide a prop for specifying a custom Popper component, but you could try using the classes prop to override the styles of the default Popper component used by TablePagination.
Here is an example of how you could use the classes prop to apply custom styles to the Popper component used by TablePagination:
import React from 'react';
import { makeStyles } from '@material-ui/core/styles';
import TablePagination from '@material-ui/core/TablePagination';
const useStyles = makeStyles({
popper: {
// Custom styles for the Popper component
}
});
function MyTablePagination(props) {
const classes = useStyles();
return (
<TablePagination
classes={{
popper: classes.popper
}}
{...props}
/>
);
}
In this example, the custom styles for the Popper component are defined in the useStyles hook and then applied to the Popper component using the classes prop. You can then use this custom MyTablePagination component in your app instead of the default TablePagination component.
Alternatively, if you want to use a custom Popper component, you could try using the withStyles higher-order component from Material-UI to create a new TablePagination component that uses your custom Popper component. Here is an example of how you could do that:
import React from 'react';
import { withStyles } from '@material-ui/core/styles';
import TablePagination from '@material-ui/core/TablePagination';
import MyCustomPopper from './MyCustomPopper';
const styles = {
popper: {
// Custom styles for the Popper component
}
};
function MyTablePagination(props) {
const { classes, ...other } = props;
return (
<TablePagination
PopperComponent={MyCustomPopper}
classes={{
popper: classes.popper
}}
{...other}
/>
);
}
export default withStyles(styles)(MyTablePagination);
In this example, the custom Popper component is specified using the PopperComponent prop, and the custom styles are applied using the classes prop and the withStyles higher-order component. You can then use the custom MyTablePagination component in your app in the same way you would use the default TablePagination component.
|
MUI TablePagination ability to use custom Popper
|
I would like to be able to use my own custom Popper component that I have built. Some of the MUI components, such as AutoComplete, allow for this by providing the PopperComponent prop; however, TablePagination doesn't have anything beyond adding a class to its popper.
Does anyone know of a way to utilize a custom Popper component with the MUI TablePagination component?
For quick reference, here are the API docs for each of the components mentioned above
TablePagination - https://mui.com/material-ui/api/table-pagination/#props
AutoComplete - https://mui.com/material-ui/api/autocomplete/#props
|
[
"The TablePagination component from Material-UI doesn't provide a prop for specifying a custom Popper component, but you could try using the classes prop to override the styles of the default Popper component used by TablePagination.\nHere is an example of how you could use the classes prop to apply custom styles to the Popper component used by TablePagination:\nimport React from 'react';\nimport { makeStyles } from '@material-ui/core/styles';\nimport TablePagination from '@material-ui/core/TablePagination';\n\nconst useStyles = makeStyles({\n popper: {\n // Custom styles for the Popper component\n }\n});\n\nfunction MyTablePagination(props) {\n const classes = useStyles();\n\n return (\n <TablePagination\n classes={{\n popper: classes.popper\n }}\n {...props}\n />\n );\n}\n\n\nIn this example, the custom styles for the Popper component are defined in the useStyles hook and then applied to the Popper component using the classes prop. You can then use this custom MyTablePagination component in your app instead of the default TablePagination component.\nAlternatively, if you want to use a custom Popper component, you could try using the withStyles higher-order component from Material-UI to create a new TablePagination component that uses your custom Popper component. Here is an example of how you could do that:\nimport React from 'react';\nimport { withStyles } from '@material-ui/core/styles';\nimport TablePagination from '@material-ui/core/TablePagination';\nimport MyCustomPopper from './MyCustomPopper';\n\nconst styles = {\n popper: {\n // Custom styles for the Popper component\n }\n};\n\nfunction MyTablePagination(props) {\n const { classes, ...other } = props;\n\n return (\n <TablePagination\n PopperComponent={MyCustomPopper}\n classes={{\n popper: classes.popper\n }}\n {...other}\n />\n );\n}\n\nexport default withStyles(styles)(MyTablePagination);\n\nIn this example, the custom Popper component is specified using the PopperComponent prop, and the custom styles are applied using the classes prop and the withStyles higher-order component. You can then use the custom MyTablePagination component in your app in the same way you would use the default TablePagination component.\n"
] |
[
0
] |
[] |
[] |
[
"material_ui",
"reactjs"
] |
stackoverflow_0074661497_material_ui_reactjs.txt
|
Q:
Mongodb Aggregate Filter Array Of Array Of Array
We would like to filter the SKU list to those that have verificationData and a differenceInStock value greater than or less than 0.
Here is an example Data Set.
[
{
"_id": "636e0beaa13ef73324e613f0",
"status": "ACTIVE",
"inventory": 132,
"parentCategory": [
"Salt"
],
"title": "Aashirvaad MRP: 28Rs Salt 27 kg Bopp Bag (Set of 1 kg x 27)",
"createdAt": "2022-11-11T08:46:34.950Z",
"updatedAt": "2022-11-24T17:43:27.361Z",
"__v": 3,
"verificationData": [
{
"_id": "637c57ebbe783a9a138fc2d3",
"verificationDate": "2022-11-22T05:02:35.155Z",
"items": {
"listingId": "636e0beaa13ef73324e613f0",
"phyiscalVerification": [
{
"verifiedBy": "634534e72ef6462fcb681a39",
"closingStock": 178,
"phyiscalStock": 178,
"differenceInStock": 0,
"verifiedAt": "2022-11-22T10:19:38.388Z",
"_id": "637ca23abe783a9a1394f402"
}
],
"_id": "637ca23abe783a9a1394f401"
},
"yearMonthDayUTC": "2022-11-22"
},
{
"_id": "637d9b65be783a9a13998726",
"verificationDate": "2022-11-23T04:02:45.804Z",
"items": {
"listingId": "636e0beaa13ef73324e613f0",
"phyiscalVerification": [
{
"verifiedBy": "634534e72ef6462fcb681a39",
"closingStock": 161,
"phyiscalStock": 167,
"differenceInStock": 6,
"verifiedAt": "2022-11-23T09:52:36.815Z",
"_id": "637ded64be783a9a13a29d55"
}
],
"_id": "637ded64be783a9a13a29d54"
},
"yearMonthDayUTC": "2022-11-23"
},
{
"_id": "637f0254be783a9a13a94354",
"verificationDate": "2022-11-24T05:34:12.995Z",
"items": {
"listingId": "636e0beaa13ef73324e613f0",
"phyiscalVerification": [
{
"verifiedBy": "634534e72ef6462fcb681a39",
"closingStock": 144,
"phyiscalStock": 146,
"differenceInStock": 2,
"verifiedAt": "2022-11-24T12:02:28.123Z",
"_id": "637f5d54be783a9a13b1039a"
}
],
"_id": "637f5d54be783a9a13b10399"
},
"yearMonthDayUTC": "2022-11-24"
},
{
"_id": "2022-11-25",
"yearMonthDayUTC": "2022-11-25",
"items": null
}
]
},
{
"_id": "62b5c39062ddb963fc64c42d",
"status": "ACTIVE",
"inventory": 10,
"parentCategory": [
"Salt"
],
"finalMeasurementUnit": "kg",
"finalMeasure": "1 kg",
"title": "Marvella Citric Acid Lemon Salt 1 kg Pouch (Set of 500 gm x 2)",
"createdAt": "2022-06-24T14:00:49.052Z",
"updatedAt": "2022-11-21T11:04:21.643Z",
"__v": 2,
"verificationData": [
{
"_id": "2022-11-22",
"yearMonthDayUTC": "2022-11-22",
"items": null
},
{
"_id": "2022-11-23",
"yearMonthDayUTC": "2022-11-23",
"items": null
},
{
"_id": "2022-11-24",
"yearMonthDayUTC": "2022-11-24",
"items": null
},
{
"_id": "2022-11-25",
"yearMonthDayUTC": "2022-11-25",
"items": null
}
]
}
]
This could have array of 100+ SKU's
Our Aggregate Functions is as Follows
let reqData = await userListing.aggregate([
{
$match: {
warehouseId: { $eq: ObjectId(warehouseId) },
parentCategory: { $in: catList },
isWarehouseListing: { $eq: true },
isBlocked: { $ne: true },
isArchived: { $ne: true },
},
},
{ $sort: { whAddedAt: -1 } },
{
$lookup: {
from: "listingstockverifications",
let: { listId: "$_id" },
pipeline: [
{
$match: {
verificationDate: {
$gte: newFromDate,
$lt: newToDate,
},
},
},
{
$project: {
verificationDate: 1,
items: {
$filter: {
input: "$items",
cond: {
$and: [
/* {
"$$this.phyiscalVerification": {
$filter: {
input: "$$this.phyiscalVerification",
as: "psitem",
cond: { $gt: [ "$$psitem.differenceInStock", 0 ] },
},
},
}, */
{
$eq: ["$$this.listingId", "$$listId"],
},
],
},
},
},
yearMonthDayUTC: {
$dateToString: {
format: "%Y-%m-%d",
date: "$verificationDate",
},
},
},
},
{ $unwind: "$items" },
],
as: "stockVerification",
},
},
{
$addFields: {
verificationData: {
$map: {
input: dummyArray,
as: "date",
in: {
$let: {
vars: {
dateIndex: {
$indexOfArray: [
"$stockVerification.yearMonthDayUTC",
"$$date",
],
},
},
in: {
$cond: {
if: { $ne: ["$$dateIndex", -1] },
then: {
$arrayElemAt: ["$stockVerification", "$$dateIndex"],
},
else: {
_id: "$$date",
yearMonthDayUTC: "$$date",
items: null,
},
},
},
},
},
},
},
},
},
{
$project: {
stockVerification: 0,
},
},
]);
At last, we would like to filter the SKU list to those which have the following data:
verificationData[].items.phyiscalVerification[].differenceInStock is greater than or less than 0
The expected output in the example above would be the 1st SKU,
as the 2nd SKU does not have any item data;
and even if a 3rd SKU had item data, it would still have to match the following condition:
verificationData[].items.phyiscalVerification[].differenceInStock is greater than or less than 0
Thank you for taking the time to read this and for your support.
A:
You can add the following two stages to your aggregation. The idea is simple: just filter out all subdocuments that do not match the condition.
Because of the nested structure it's not the sexiest of pipelines, but it will suffice.
db.collection.aggregate([
{
$match: {
$or: [
{
"verificationData.items.phyiscalVerification.differenceInStock": {
$gt: 0
}
},
{
"verificationData.items.phyiscalVerification.differenceInStock": {
$lt: 0
}
}
]
}
},
{
$addFields: {
verificationData: {
$filter: {
input: {
$map: {
input: {
$filter: {
input: "$verificationData",
as: "verification",
cond: {
$ne: [
"$$verification.items",
null
]
}
}
},
as: "top",
in: {
$mergeObjects: [
"$$top",
{
"items": {
"$mergeObjects": [
"$$top.items",
{
phyiscalVerification: {
$filter: {
input: "$$top.items.phyiscalVerification",
as: "pshycical",
cond: {
$ne: [
"$$pshycical.differenceInStock",
0
]
}
}
}
}
]
}
}
]
}
}
},
cond: {
$gt: [
{
$size: "$$this.items.phyiscalVerification"
},
0
]
}
}
}
}
}
])
Mongo Playground
A:
To filter a list of SKUs in MongoDB based on the presence of a field and a numeric comparison, you can use the $exists operator to check for the presence of a field and the $gt (greater than) or $lt (less than) operators to compare numeric values. For example, you could use the following query to filter SKUs that have a verificationData field and a differenceInStock value that is greater than 0:
db.collection.find({
verificationData: { $exists: true },
"verificationData.items.phyiscalVerification.differenceInStock": { $gt: 0 }
})
This query will match all documents in the collection that have a verificationData field and a differenceInStock field within the phyiscalVerification array that is greater than 0. To match SKUs with a differenceInStock value that is less than 0, you can use the $lt operator instead:
db.collection.find({
verificationData: { $exists: true },
"verificationData.items.phyiscalVerification.differenceInStock": { $lt: 0 }
})
If you want to return only the specific fields specified in the query, you can use the projection argument of the find() method to specify which fields you want to include in the returned documents. For example:
// Query to return only the SKU and differenceInStock fields
db.skus.find(
{
verificationData: { $exists: true },
"verificationData.items.phyiscalVerification.differenceInStock": { $gt: 0 }
},
{
_id: 0,
SKU: 1,
"verificationData.items.phyiscalVerification.differenceInStock": 1
}
)
This will return only the SKU and differenceInStock fields for the documents that match the query conditions. You can adjust the projection to include or exclude other fields as needed.
|
Mongodb Aggregate Filter Array Of Array Of Array
|
We would like to filter the SKU list to those that have verificationData and a differenceInStock value greater than or less than 0.
Here is an example Data Set.
[
{
"_id": "636e0beaa13ef73324e613f0",
"status": "ACTIVE",
"inventory": 132,
"parentCategory": [
"Salt"
],
"title": "Aashirvaad MRP: 28Rs Salt 27 kg Bopp Bag (Set of 1 kg x 27)",
"createdAt": "2022-11-11T08:46:34.950Z",
"updatedAt": "2022-11-24T17:43:27.361Z",
"__v": 3,
"verificationData": [
{
"_id": "637c57ebbe783a9a138fc2d3",
"verificationDate": "2022-11-22T05:02:35.155Z",
"items": {
"listingId": "636e0beaa13ef73324e613f0",
"phyiscalVerification": [
{
"verifiedBy": "634534e72ef6462fcb681a39",
"closingStock": 178,
"phyiscalStock": 178,
"differenceInStock": 0,
"verifiedAt": "2022-11-22T10:19:38.388Z",
"_id": "637ca23abe783a9a1394f402"
}
],
"_id": "637ca23abe783a9a1394f401"
},
"yearMonthDayUTC": "2022-11-22"
},
{
"_id": "637d9b65be783a9a13998726",
"verificationDate": "2022-11-23T04:02:45.804Z",
"items": {
"listingId": "636e0beaa13ef73324e613f0",
"phyiscalVerification": [
{
"verifiedBy": "634534e72ef6462fcb681a39",
"closingStock": 161,
"phyiscalStock": 167,
"differenceInStock": 6,
"verifiedAt": "2022-11-23T09:52:36.815Z",
"_id": "637ded64be783a9a13a29d55"
}
],
"_id": "637ded64be783a9a13a29d54"
},
"yearMonthDayUTC": "2022-11-23"
},
{
"_id": "637f0254be783a9a13a94354",
"verificationDate": "2022-11-24T05:34:12.995Z",
"items": {
"listingId": "636e0beaa13ef73324e613f0",
"phyiscalVerification": [
{
"verifiedBy": "634534e72ef6462fcb681a39",
"closingStock": 144,
"phyiscalStock": 146,
"differenceInStock": 2,
"verifiedAt": "2022-11-24T12:02:28.123Z",
"_id": "637f5d54be783a9a13b1039a"
}
],
"_id": "637f5d54be783a9a13b10399"
},
"yearMonthDayUTC": "2022-11-24"
},
{
"_id": "2022-11-25",
"yearMonthDayUTC": "2022-11-25",
"items": null
}
]
},
{
"_id": "62b5c39062ddb963fc64c42d",
"status": "ACTIVE",
"inventory": 10,
"parentCategory": [
"Salt"
],
"finalMeasurementUnit": "kg",
"finalMeasure": "1 kg",
"title": "Marvella Citric Acid Lemon Salt 1 kg Pouch (Set of 500 gm x 2)",
"createdAt": "2022-06-24T14:00:49.052Z",
"updatedAt": "2022-11-21T11:04:21.643Z",
"__v": 2,
"verificationData": [
{
"_id": "2022-11-22",
"yearMonthDayUTC": "2022-11-22",
"items": null
},
{
"_id": "2022-11-23",
"yearMonthDayUTC": "2022-11-23",
"items": null
},
{
"_id": "2022-11-24",
"yearMonthDayUTC": "2022-11-24",
"items": null
},
{
"_id": "2022-11-25",
"yearMonthDayUTC": "2022-11-25",
"items": null
}
]
}
]
This could have array of 100+ SKU's
Our Aggregate Functions is as Follows
let reqData = await userListing.aggregate([
{
$match: {
warehouseId: { $eq: ObjectId(warehouseId) },
parentCategory: { $in: catList },
isWarehouseListing: { $eq: true },
isBlocked: { $ne: true },
isArchived: { $ne: true },
},
},
{ $sort: { whAddedAt: -1 } },
{
$lookup: {
from: "listingstockverifications",
let: { listId: "$_id" },
pipeline: [
{
$match: {
verificationDate: {
$gte: newFromDate,
$lt: newToDate,
},
},
},
{
$project: {
verificationDate: 1,
items: {
$filter: {
input: "$items",
cond: {
$and: [
/* {
"$$this.phyiscalVerification": {
$filter: {
input: "$$this.phyiscalVerification",
as: "psitem",
cond: { $gt: [ "$$psitem.differenceInStock", 0 ] },
},
},
}, */
{
$eq: ["$$this.listingId", "$$listId"],
},
],
},
},
},
yearMonthDayUTC: {
$dateToString: {
format: "%Y-%m-%d",
date: "$verificationDate",
},
},
},
},
{ $unwind: "$items" },
],
as: "stockVerification",
},
},
{
$addFields: {
verificationData: {
$map: {
input: dummyArray,
as: "date",
in: {
$let: {
vars: {
dateIndex: {
$indexOfArray: [
"$stockVerification.yearMonthDayUTC",
"$$date",
],
},
},
in: {
$cond: {
if: { $ne: ["$$dateIndex", -1] },
then: {
$arrayElemAt: ["$stockVerification", "$$dateIndex"],
},
else: {
_id: "$$date",
yearMonthDayUTC: "$$date",
items: null,
},
},
},
},
},
},
},
},
},
{
$project: {
stockVerification: 0,
},
},
]);
At last, we would like to filter the SKU list to those which have the following data:
verificationData[].items.phyiscalVerification[].differenceInStock is greater than or less than 0
The expected output in the example above would be the 1st SKU,
as the 2nd SKU does not have any item data;
and even if a 3rd SKU had item data, it would still have to match the following condition:
verificationData[].items.phyiscalVerification[].differenceInStock is greater than or less than 0
Thank you for taking the time to read this and for your support.
|
[
"You can add these two following stages to your aggregation, The idea is simple - just filter out all subdocuments that do not match the condition.\nBecause of the nested structure it's just not the sexiest of pipelines but it will suffice.\ndb.collection.aggregate([\n {\n $match: {\n $or: [\n {\n \"verificationData.items.phyiscalVerification.differenceInStock\": {\n $gt: 0\n }\n },\n {\n \"verificationData.items.phyiscalVerification.differenceInStock\": {\n $lt: 0\n }\n }\n ]\n }\n },\n {\n $addFields: {\n verificationData: {\n $filter: {\n input: {\n $map: {\n input: {\n $filter: {\n input: \"$verificationData\",\n as: \"verification\",\n cond: {\n $ne: [\n \"$$verification.items\",\n null\n ]\n }\n }\n },\n as: \"top\",\n in: {\n $mergeObjects: [\n \"$$top\",\n {\n \"items\": {\n \"$mergeObjects\": [\n \"$$top.items\",\n {\n phyiscalVerification: {\n $filter: {\n input: \"$$top.items.phyiscalVerification\",\n as: \"pshycical\",\n cond: {\n $ne: [\n \"$$pshycical.differenceInStock\",\n 0\n ]\n }\n }\n }\n }\n ]\n }\n }\n ]\n }\n }\n },\n cond: {\n $gt: [\n {\n $size: \"$$this.items.phyiscalVerification\"\n },\n 0\n ]\n }\n }\n }\n }\n }\n])\n\nMongo Playground\n",
"To filter a list of SKUs in MongoDB based on the presence of a field and a numeric comparison, you can use the $exists operator to check for the presence of a field and the $gt (greater than) or $lt (less than) operators to compare numeric values. For example, you could use the following query to filter SKUs that have a verificationData field and a differenceInStock value that is greater than 0:\ndb.collection.find({\n verificationData: { $exists: true },\n \"verificationData.items.phyiscalVerification.differenceInStock\": { $gt: 0 }\n})\n\nThis query will match all documents in the collection that have a verificationData field and a differenceInStock field within the phyiscalVerification array that is greater than 0. To match SKUs with a differenceInStock value that is less than 0, you can use the $lt operator instead:\ndb.collection.find({\n verificationData: { $exists: true },\n \"verificationData.items.phyiscalVerification.differenceInStock\": { $lt: 0 }\n})\n\nIf you want to return only the specific fields specified in the query, you can use the projection argument of the find() method to specify which fields you want to include in the returned documents. For example:\n// Query to return only the SKU and differenceInStock fields\ndb.skus.find(\n {\n verificationData: { $exists: true },\n \"verificationData.items.phyiscalVerification.differenceInStock\": { $gt: 0 }\n },\n {\n _id: 0,\n SKU: 1,\n \"verificationData.items.phyiscalVerification.differenceInStock\": 1\n }\n)\n\nThis will return only the SKU and differenceInStock fields for the documents that match the query conditions. You can adjust the projection to include or exclude other fields as needed.\n"
] |
[
0,
0
] |
[] |
[] |
[
"aggregation_framework",
"mongodb",
"mongodb_query",
"mongoose"
] |
stackoverflow_0074565948_aggregation_framework_mongodb_mongodb_query_mongoose.txt
|
Q:
In Nest js, I added Redis as a cache manager. And can't find any added data in Redis after calling the set function. So, am I missing something?
Node version: v14.15.4
Nest-js version: 9.0.0
app.module.ts
Here is the code.
In the app module, I am registering Redis as a cache manager.
@Module({
imports: [
CacheModule.register({
isGlobal: true,
store: redisStore,
url: process.env.REDIS_URL,
})
],
controllers: [AppController],
providers: [AppService],
})
export class AppModule {}
service.ts
The cacheData method is for storing data with a key -> the problem is that the set function doesn't save anything.
And getData is for returning the data by key.
@Injectable()
export class SomeService {
constructor(@Inject(CACHE_MANAGER) private cacheManager: Cache) {}
async cacheData(key: string, data): Promise<void> {
await this.cacheManager.set(key, data);
}
async getData(key: string, data): Promise<any> {
return this.cacheManager.get(key);
}
}
It doesn't throw any error in runtime.
A:
The default expiration time of the cache is 5 seconds.
To disable expiration of the cache, set the ttl configuration property to 0:
await this.cacheManager.set('key', 'value', { ttl: 0 });
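As a minimal sketch (assuming cache-manager v4-style options, where ttl is in seconds and 0 disables expiry), you could also set the default TTL once at registration time instead of on every set call:
@Module({
  imports: [
    CacheModule.register({
      isGlobal: true,
      store: redisStore,
      url: process.env.REDIS_URL,
      ttl: 0, // assumed default here: never expire cached values
    })
  ],
})
export class AppModule {}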
A:
I have met the same problem as you; the way to fix it is to use the install command to change the version of cache-manager-redis-store to 2.0.0, i.e. 'npm i [email protected]'.
When you finish this step, redisStore can be resolved and the database can be linked.
A:
I am not sure why your getData function returns Promise<void> have you tried returning Promise<any> or the data type you are expecting e.g. Promise<string>. You could also try adding await.
async getData(key: string, data): Promise<any> {
return await this.cacheManager.get(key);
}
Are you sure that you are connecting to Redis successfully? Have you tried adding password and tls (true/false) configuration?
@Module({
imports: [
CacheModule.register({
isGlobal: true,
store: redisStore,
url: process.env.REDIS_URL,
password: process.env.REDIS_PASSWORD,
tls: process.env.REDIS_TLS
})
],
controllers: [AppController],
providers: [AppService],
})
export class AppModule {}
A:
I had the same problem.
It looks like NestJS v9 is incompatible with version 5 of cache-manager. So you need to downgrade to v4 for now, until this issue is resolved: https://github.com/node-cache-manager/node-cache-manager/issues/210
Change your package.json to have this in the dependencies:
"cache-manager": "^4.0.0",
Another commenter also suggested lowering the redis cache version.
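Putting the two version suggestions above together, a package.json sketch (versions taken from the answers above; treat them as assumptions for your setup) would contain:
"dependencies": {
  "cache-manager": "^4.0.0",
  "cache-manager-redis-store": "^2.0.0"
}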
|
In NestJS, I added Redis as a cache manager, but I can't find any data in Redis after calling the set function. Am I missing something?
|
Node version: v14.15.4
Nest-js version: 9.0.0
app.module.ts
Here is the code.
In the app module, I am registering Redis as a cache manager.
@Module({
imports: [
CacheModule.register({
isGlobal: true,
store: redisStore,
url: process.env.REDIS_URL,
})
],
controllers: [AppController],
providers: [AppService],
})
export class AppModule {}
service.ts
The cacheData method is for storing data with a key -> the problem is that the set function doesn't save anything.
And getData is for returning the data by key.
@Injectable()
export class SomeService {
constructor(@Inject(CACHE_MANAGER) private cacheManager: Cache) {}
async cacheData(key: string, data): Promise<void> {
await this.cacheManager.set(key, data);
}
async getData(key: string, data): Promise<any> {
return this.cacheManager.get(key);
}
}
It doesn't throw any error in runtime.
|
[
"The default expiration time of the cache is 5 seconds.\nTo disable expiration of the cache, set the ttl configuration property to 0:\nawait this.cacheManager.set('key', 'value', { ttl: 0 });\n\n",
"i has met the same problem as you,the way to fix this is using the install cmd to change the version of cache-manager-redis-store to 2.0.0 like 'npm i [email protected]'\nwhen you finish this step, the use of redisStore can be found,then the database can be linked.\n",
"I am not sure why your getData function returns Promise<void> have you tried returning Promise<any> or the data type you are expecting e.g. Promise<string>. You could also try adding await.\n\n\n async getData(key: string, data): Promise<any> {\n return await this.cacheManager.get(key);\n }\n\n\n\nAre you sure that you are connecting to redis successfully ?. Have you tried adding password and tls (true/false) configuration ?\n\n\n@Module({\n imports: [\n CacheModule.register({\n isGlobal: true,\n store: redisStore,\n url: process.env.REDIS_URL,\n password: process.env.REDIS_PASSWORD,\n tls: process.env.REDIS_TLS\n })\n ],\n controllers: [AppController],\n providers: [AppService],\n})\nexport class AppModule {}\n\n\n\n",
"I had the same problem.\nIt looks like NestJS v9 incompatible with version 5 of cache-manager. So you need to downgrade to v4 for now, until this issue is resolved: https://github.com/node-cache-manager/node-cache-manager/issues/210\nChange your package.json to have this in the dependencies:\n \"cache-manager\": \"^4.0.0\",\nAnother commenter also suggested lowering the redis cache version.\n"
] |
[
1,
1,
0,
0
] |
[] |
[] |
[
"javascript",
"nestjs",
"nestjs_config",
"node.js",
"redis"
] |
stackoverflow_0073213405_javascript_nestjs_nestjs_config_node.js_redis.txt
|
Q:
How can administrators of a website turn off Cloudflare for me?
We're an organization which is trying to archive Russian independent media. But some websites block access via Cloudflare. Can we get permission from the media to access their sites so that Cloudflare doesn't block us?
Right now we're trying to archive these websites using Selenium, but this is not convenient for us.
A:
Cloudflare is configured on that "media" side; you can't do anything about it without getting permission from that media outlet (you can try to contact them via e-mail, for example).
Or, you can always use a VPN to get access.
|
How can administrators of a website turn off Cloudflare for me?
|
We're an organization which is trying to archive Russian independent media. But some websites block access via Cloudflare. Can we get permission from the media to access their sites so that Cloudflare doesn't block us?
Right now we're trying to archive these websites using Selenium, but this is not convenient for us.
|
[
"Cloudflare configured from the that \"media\" side, you can't do anything with that without getting permissions from that media (you can try to connect with them via e-mail for example).\nOr, you can always use VPN for getting access.\n"
] |
[
0
] |
[] |
[] |
[
"archive",
"cloudflare"
] |
stackoverflow_0074585138_archive_cloudflare.txt
|
Q:
Pass Criteria from Excel to VBA function call
Background
I've a VBA function which takes a range of cells and does some operation on it and returns a value to be filled in cell.
Function ProcessCells(sheetName As String, ParamArray MyParams() As Variant) As String
..
End Function
This takes sheetName and MyParams as input. MyParams is a pair-wise list of arguments, each pair containing a columnIndex and the filter to apply to it.
Example Usage in my excel sheet:
=ProcessCells("my-sheet-1", "CarModel", "Honda", "City", "Vancouver")
This function call looks at the data in my-sheet-1, filters rows where CarModel=Honda and City=Vancouver, and then does some processing on the rows.
Problem
Instead of passing a value like "Honda" or "Vancouver" to perform filtering, I want users to be able to pass a filter expression from the Excel formula, which I can run in my VBA function to filter rows out before processing.
Something like :
=ProcessCells("my-sheet-1", "CarModel", Val <> "Honda", "City", Value_in_List("Vancouver", "Vicotria"))
Similar to the built-in COUNTIF function, which takes a range and criteria:
=COUNTIF(Where do you want to look?, What do you want to look for?)
https://support.microsoft.com/en-us/office/countif-function-e0de10c6-f885-4e71-abb4-1f464816df34
A:
Please, try calling it as:
=ProcessCells("my-sheet-1", "CarModel", "<>Honda", "City", {"Vancouver", "Vicotria"})
and use the second ParamArray element as filtering criteria for the first one column ("CarModel") and fourth like criteria array for filtering by the third ("City") column...
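For illustration only, a rough sketch (assumed logic, not the asker's real implementation) of how ProcessCells could walk the pair-wise ParamArray and tell a plain value, an operator string like "<>Honda", and an array constant apart:
Function ProcessCells(sheetName As String, ParamArray MyParams() As Variant) As String
    Dim i As Long, colName As String, crit As Variant, kind As String
    For i = LBound(MyParams) To UBound(MyParams) Step 2
        colName = CStr(MyParams(i))      ' e.g. "CarModel" or "City"
        crit = MyParams(i + 1)           ' e.g. "<>Honda" or {"Vancouver", "Victoria"}
        If IsArray(crit) Then
            kind = "one-of-list"         ' array constant: match any listed value
        ElseIf Left$(CStr(crit), 2) = "<>" Then
            kind = "not-equal"           ' COUNTIF-style operator prefix
        Else
            kind = "equals"              ' plain value: exact match
        End If
        ProcessCells = ProcessCells & colName & "=" & kind & "; "
    Next i
End Function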
|
Pass Criteria from Excel to VBA function call
|
Background
I've a VBA function which takes a range of cells and does some operation on it and returns a value to be filled in cell.
Function ProcessCells(sheetName As String, ParamArray MyParams() As Variant) As String
..
End Function
This takes sheetName and MyParams as input. MyParams is a pair-wise list of arguments, each pair containing a columnIndex and the filter to apply to it.
Example Usage in my excel sheet:
=ProcessCells("my-sheet-1", "CarModel", "Honda", "City", "Vancouver")
This function call looks at the data in my-sheet-1, filters rows where CarModel=Honda and City=Vancouver, and then does some processing on the rows.
Problem
Instead of passing a value like "Honda" or "Vancouver" to perform filtering, I want users to be able to pass a filter expression from the Excel formula, which I can run in my VBA function to filter rows out before processing.
Something like :
=ProcessCells("my-sheet-1", "CarModel", Val <> "Honda", "City", Value_in_List("Vancouver", "Vicotria"))
Similar to the built-in COUNTIF function, which takes a range and criteria:
=COUNTIF(Where do you want to look?, What do you want to look for?)
https://support.microsoft.com/en-us/office/countif-function-e0de10c6-f885-4e71-abb4-1f464816df34
|
[
"Please, try calling it as:\n=ProcessCells(\"my-sheet-1\", \"CarModel\", \"<>Honda\", \"City\", {\"Vancouver\", \"Vicotria\"})\n\nand use the second ParamArray element as filtering criteria for the first one column (\"CarModel\") and fourth like criteria array for filtering by the third (\"City\") column...\n"
] |
[
0
] |
[] |
[] |
[
"excel",
"vba"
] |
stackoverflow_0074661393_excel_vba.txt
|
Q:
TailwindCss Not Responsive to Screen Size
App.js
function App() {
return (
<div className="sm:text-center">
<h1>
Hello
</h1>
</div>
);
}
export default App;
tailwind.js
/** @type {import('tailwindcss').Config} */
module.exports = {
content: [],
presets: [],
darkMode: 'media', // or 'class'
theme: {
screens: {
sm: '640px',
md: '768px',
lg: '1024px',
xl: '1280px',
'2xl': '1536px',
},
I'm expecting the text to be aligned to the left by default, and on screens 640px and above the text should be centered. However, the text stays aligned to the left. I do have the HTML viewport meta tag for responsiveness in my index.html file.
Any help is appreciated!
A:
In this case it only activates when it's 640px and below. Plus, your small media query should be around 360px.
A:
The problem is that I left the content array empty. I had to configure the path to any file that contains Tailwind class names.
Which is a little weird, because other class names seemed to work, just not the breakpoints. But that did solve the problem.
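For reference, a sketch of a filled-in content array for a typical React layout (the glob paths are assumptions; adjust them to your project structure):
/** @type {import('tailwindcss').Config} */
module.exports = {
  content: [
    "./src/**/*.{js,jsx,ts,tsx}",  // all components that use Tailwind classes
    "./public/index.html",
  ],
  // ...rest of the config unchanged
}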
|
TailwindCss Not Responsive to Screen Size
|
App.js
function App() {
return (
<div className="sm:text-center">
<h1>
Hello
</h1>
</div>
);
}
export default App;
tailwind.js
/** @type {import('tailwindcss').Config} */
module.exports = {
content: [],
presets: [],
darkMode: 'media', // or 'class'
theme: {
screens: {
sm: '640px',
md: '768px',
lg: '1024px',
xl: '1280px',
'2xl': '1536px',
},
I'm expecting the text to be aligned to the left by default, and on screens 640px and above the text should be centered. However, the text stays aligned to the left. I do have the HTML viewport meta tag for responsiveness in my index.html file.
Any help is appreciated!
|
[
"In this case it only activates when its 640px and below. Plus your small media query should be around 360px\n",
"The problem is because I left the content array empty. I had to configure the path to any file that contained tailwind class names.\nWhich is a little weird because other class names seemed to work just not the breakpoints. But that did solve the problem.\n"
] |
[
0,
0
] |
[] |
[] |
[
"reactjs",
"tailwind_css"
] |
stackoverflow_0074646722_reactjs_tailwind_css.txt
|
Q:
In a Categorical Variable (17 unique dummy variables), I want to select the 10 unique values which have the highest frequency
I have a categorical variable named city which has 17 different cities' names. The total number of rows in the data is 228. I want to select only the 10 cities that have the highest frequency in the data.
A:
Here is a solution, with reproducible data:
#Load in library:
library(dplyr)
#Use storms, a data set that comes with dplyr:
our_data <- storms
#Select the column that you want to judge the frequency of each value of:
our_data_column_of_interest <- our_data %>% dplyr::select(name)
#Make a data frame that is the top 10 most common occurrences in that column:
top_10 <- as.data.frame(table(our_data_column_of_interest)) %>%
arrange(desc(Freq)) %>%
slice(1:10) %>%
dplyr::select(name) %>%
as_tibble()
#This makes a table out of our column, which summarizes the data by occurrence, #arranges it by descending frequency, selects the first 10 rows, and then selects #just the information of interest, and makes it back into a tibble.
#Now, filter for values in our original dataset based on the top 10 values that we created:
final_data <- our_data %>% filter(our_data$name %in% top_10$name)
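A more compact sketch of the same idea, using dplyr's count() and semi_join() on the same example data (column name assumed to be name, as above):
library(dplyr)
# Count how often each name occurs, descending, and keep the 10 most frequent
top_10 <- storms %>%
  count(name, sort = TRUE) %>%
  slice_head(n = 10)
# Keep only the rows whose name is among the top 10
final_data <- storms %>% semi_join(top_10, by = "name")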
|
In a Categorical Variable (17 unique dummy variables), I want to select the 10 unique values which have the highest frequency
|
I have a categorical variable named city which has 17 different cities' names. The total number of rows in the data is 228. I want to select only the 10 cities that have the highest frequency in the data.
|
[
"Here is a solution, with reproducible data:\n#Load in library:\n\nlibrary(dplyr)\n\n#Use storms, a data set that comes with dplyr:\n\nour_data <- storms\n\n#Select the column that you want to judge the frequency of each value of:\n\nour_data_column_of_interest <- our_data %>% dplyr::select(name)\n\n#Make a data frame that is the top 10 most common occurences in that colun:\n\ntop_10 <- as.data.frame(table(our_data_column_of_interest)) %>% \n arrange(desc(Freq)) %>% \n slice(1:10) %>% \n dplyr::select(name) %>% \n as_tibble()\n\n#This makes a table out of our column, which summarizes the data by occurence, #arranges it by descending frequency, selects the first 10 rows, and then selects #just the information of interest, and makes it back into a tibble. \n\n#Now, filter for values in our original dataset based on the top 10 values that we created:\n\n\nfinal_data <- our_data %>% filter(our_data$name %in% top_10$name)\n\n"
] |
[
0
] |
[] |
[] |
[
"r"
] |
stackoverflow_0074661418_r.txt
|
Q:
How to combine streams in anyio?
How to iterate over multiple streams at once in anyio, interleaving the items as they appear?
Let's say, I want a simple equivalent of annotate-output. The simplest I could make is
#!/usr/bin/env python3
import dataclasses
from collections.abc import Sequence
from typing import TypeVar
import anyio
import anyio.abc
import anyio.streams.text
SCRIPT = r"""
for idx in $(seq 1 5); do
printf "%s " "$idx"
date -Ins
sleep 0.08
done
echo "."
"""
CMD = ["bash", "-x", "-c", SCRIPT]
def print_data(data: str, is_stderr: bool) -> None:
print(f"{int(is_stderr)}: {data!r}")
T_Item = TypeVar("T_Item") # TODO: covariant=True?
@dataclasses.dataclass(eq=False)
class CombinedReceiveStream(anyio.abc.ObjectReceiveStream[tuple[int, T_Item]]):
"""Combines multiple streams into a single one, annotating each item with position index of the origin stream"""
streams: Sequence[anyio.abc.ObjectReceiveStream[T_Item]]
max_buffer_size_items: int = 32
def __post_init__(self) -> None:
self._queue_send, self._queue_receive = anyio.create_memory_object_stream(
max_buffer_size=self.max_buffer_size_items,
# Should be: `item_type=tuple[int, T_Item] | None`
)
self._pending = set(range(len(self.streams)))
self._started = False
self._task_group = anyio.create_task_group()
async def _copier(self, idx: int) -> None:
assert idx in self._pending
stream = self.streams[idx]
async for item in stream:
await self._queue_send.send((idx, item))
assert idx in self._pending
self._pending.remove(idx)
await self._queue_send.send(None) # Wake up the `receive` waiters, if any.
async def _start(self) -> None:
assert not self._started
await self._task_group.__aenter__()
for idx in range(len(self.streams)):
self._task_group.start_soon(self._copier, idx, name=f"_combined_receive_copier_{idx}")
self._started = True
async def receive(self) -> tuple[int, T_Item]:
if not self._started:
await self._start()
# Non-blocking pre-check.
# Gathers items that are in the queue when `self._pending` is empty.
try:
item = self._queue_receive.receive_nowait()
except anyio.WouldBlock:
pass
else:
if item is not None:
return item
while True:
if not self._pending:
raise anyio.EndOfStream
item = await self._queue_receive.receive()
if item is not None:
return item
async def aclose(self) -> None:
if self._started:
self._task_group.cancel_scope.cancel()
self._started = False
await self._task_group.__aexit__(None, None, None)
async def amain(max_buffer_size_items: int = 32) -> None:
async with await anyio.open_process(CMD) as proc:
assert proc.stdout is not None
assert proc.stderr is not None
raw_streams = [proc.stdout, proc.stderr]
idx_to_is_stderr = {0: False, 1: True} # just making it explicit
streams = [anyio.streams.text.TextReceiveStream(stream) for stream in raw_streams]
async with CombinedReceiveStream(streams) as outputs:
async for idx, data in outputs:
is_stderr = idx_to_is_stderr[idx]
print_data(data, is_stderr=is_stderr)
def main():
anyio.run(amain)
if __name__ == "__main__":
main()
However, this CombinedReceiveStream solution is somewhat ugly, and I would assume some solution should already exist. What am I overlooking?
A:
This should be safer and more idiomatic.
class CtxObj:
"""
Add an async context manager that calls `_ctx` to run the context.
Usage::
class Foo(CtxObj):
@asynccontextmanager
async def _ctx(self):
yield self # or whatever
async with Foo() as self_or_whatever:
pass
"""
async def __aenter__(self):
self.__ctx = ctx = self._ctx() # pylint: disable=E1101,W0201
return await ctx.__aenter__()
def __aexit__(self, *tb):
return self.__ctx.__aexit__(*tb)
@dataclasses.dataclass(eq=False)
class CombinedReceiveStream(CtxObj):
"""Combines multiple streams into a single one, annotating each item with position index of the origin stream"""
streams: Sequence[anyio.abc.ObjectReceiveStream[T_Item]]
max_buffer_size_items: int = 32
def __post_init__(self) -> None:
self._queue_send, self._queue_receive = anyio.create_memory_object_stream(
max_buffer_size=self.max_buffer_size_items,
# Should be: `item_type=tuple[int, T_Item] | None`
)
self._pending = set(range(len(self.streams)))
@asynccontextmanager
async def _ctx(self):
async with anyio.create_task_group() as tg:
for i in self._pending:
tg.start_soon(self._copier, i)
yield self
tg.cancel_scope.cancel()
async def _copier(self, idx: int) -> None:
stream = self.streams[idx]
async for item in stream:
await self._queue_send.send((idx, item))
self._pending.remove(idx)
if not self._pending:
await self._queue_send.aclose()
async def receive(self) -> tuple[int, T_Item]:
return await self._queue_receive.receive()
def __aiter__(self):
return self
async def __anext__(self):
try:
return await self.receive()
except anyio.EndOfStream:
raise StopAsyncIteration() from None
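For completeness, a usage sketch of this context-manager variant, mirroring the amain from the question (CMD and print_data as defined there):
async def amain() -> None:
    async with await anyio.open_process(CMD) as proc:
        streams = [anyio.streams.text.TextReceiveStream(s)
                   for s in (proc.stdout, proc.stderr)]
        async with CombinedReceiveStream(streams) as outputs:
            async for idx, data in outputs:
                print_data(data, is_stderr=(idx == 1))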
|
How to combine streams in anyio?
|
How to iterate over multiple streams at once in anyio, interleaving the items as they appear?
Let's say, I want a simple equivalent of annotate-output. The simplest I could make is
#!/usr/bin/env python3
import dataclasses
from collections.abc import Sequence
from typing import TypeVar
import anyio
import anyio.abc
import anyio.streams.text
SCRIPT = r"""
for idx in $(seq 1 5); do
printf "%s " "$idx"
date -Ins
sleep 0.08
done
echo "."
"""
CMD = ["bash", "-x", "-c", SCRIPT]
def print_data(data: str, is_stderr: bool) -> None:
print(f"{int(is_stderr)}: {data!r}")
T_Item = TypeVar("T_Item") # TODO: covariant=True?
@dataclasses.dataclass(eq=False)
class CombinedReceiveStream(anyio.abc.ObjectReceiveStream[tuple[int, T_Item]]):
"""Combines multiple streams into a single one, annotating each item with position index of the origin stream"""
streams: Sequence[anyio.abc.ObjectReceiveStream[T_Item]]
max_buffer_size_items: int = 32
def __post_init__(self) -> None:
self._queue_send, self._queue_receive = anyio.create_memory_object_stream(
max_buffer_size=self.max_buffer_size_items,
# Should be: `item_type=tuple[int, T_Item] | None`
)
self._pending = set(range(len(self.streams)))
self._started = False
self._task_group = anyio.create_task_group()
async def _copier(self, idx: int) -> None:
assert idx in self._pending
stream = self.streams[idx]
async for item in stream:
await self._queue_send.send((idx, item))
assert idx in self._pending
self._pending.remove(idx)
await self._queue_send.send(None) # Wake up the `receive` waiters, if any.
async def _start(self) -> None:
assert not self._started
await self._task_group.__aenter__()
for idx in range(len(self.streams)):
self._task_group.start_soon(self._copier, idx, name=f"_combined_receive_copier_{idx}")
self._started = True
async def receive(self) -> tuple[int, T_Item]:
if not self._started:
await self._start()
# Non-blocking pre-check.
# Gathers items that are in the queue when `self._pending` is empty.
try:
item = self._queue_receive.receive_nowait()
except anyio.WouldBlock:
pass
else:
if item is not None:
return item
while True:
if not self._pending:
raise anyio.EndOfStream
item = await self._queue_receive.receive()
if item is not None:
return item
async def aclose(self) -> None:
if self._started:
self._task_group.cancel_scope.cancel()
self._started = False
await self._task_group.__aexit__(None, None, None)
async def amain(max_buffer_size_items: int = 32) -> None:
async with await anyio.open_process(CMD) as proc:
assert proc.stdout is not None
assert proc.stderr is not None
raw_streams = [proc.stdout, proc.stderr]
idx_to_is_stderr = {0: False, 1: True} # just making it explicit
streams = [anyio.streams.text.TextReceiveStream(stream) for stream in raw_streams]
async with CombinedReceiveStream(streams) as outputs:
async for idx, data in outputs:
is_stderr = idx_to_is_stderr[idx]
print_data(data, is_stderr=is_stderr)
def main():
anyio.run(amain)
if __name__ == "__main__":
main()
However, this CombinedReceiveStream solution is somewhat ugly, and I would assume some solution should already exist. What am I overlooking?
|
[
"This should be more safe and idiomatic.\nclass CtxObj:\n \"\"\"\n Add an async context manager that calls `_ctx` to run the context.\n\n Usage::\n class Foo(CtxObj):\n @asynccontextmanager\n async def _ctx(self):\n yield self # or whatever\n\n async with Foo() as self_or_whatever:\n pass\n \"\"\"\n\n async def __aenter__(self):\n self.__ctx = ctx = self._ctx() # pylint: disable=E1101,W0201\n return await ctx.__aenter__()\n\n def __aexit__(self, *tb):\n return self.__ctx.__aexit__(*tb)\n\n\[email protected](eq=False)\nclass CombinedReceiveStream(CtxObj):\n \"\"\"Combines multiple streams into a single one, annotating each item with position index of the origin stream\"\"\"\n\n streams: Sequence[anyio.abc.ObjectReceiveStream[T_Item]]\n max_buffer_size_items: int = 32\n\n def __post_init__(self) -> None:\n self._queue_send, self._queue_receive = anyio.create_memory_object_stream(\n max_buffer_size=self.max_buffer_size_items,\n # Should be: `item_type=tuple[int, T_Item] | None`\n )\n self._pending = set(range(len(self.streams)))\n\n @asynccontextmanager\n async def _ctx(self):\n async with anyio.create_task_group() as tg:\n for i in self._pending:\n tg.start_soon(self._copier, i)\n\n yield self\n tg.cancel_scope.cancel()\n\n\n async def _copier(self, idx: int) -> None:\n stream = self.streams[idx]\n async for item in stream:\n await self._queue_send.send((idx, item))\n self._pending.remove(idx)\n if not self._pending:\n await self._queue_send.aclose()\n\n\n async def receive(self) -> tuple[int, T_Item]:\n return await self._queue_receive.receive()\n\n def __aiter__(self):\n return self\n\n async def __anext__(self):\n try:\n return await self.receive()\n except anyio.EndOfStream:\n raise StopAsyncIteration() from None\n\n"
] |
[
1
] |
[] |
[] |
[
"anyio",
"python",
"python_trio"
] |
stackoverflow_0074661106_anyio_python_python_trio.txt
|
Q:
Tell pylint that a given decorator is a classmethod
How can I modify my pylintrc so that a given decorator is interpreted as a classmethod.
pydantic defines a validator decorator to allow for attribute validation of model classes and operates as a class method. pylint throws a
E0213: Method 'has_risk_assigned' should have "self" as first argument (no-self-argument)
for a validator method declared as:
from pydantic import BaseModel, validator
class RiskyRecord(BaseModel):
# ... attributes ...
@validator('risk')
def has_risk_assigned(cls, v):
# ... make sure that risk is properly assigned ...
How can I configure my pylintrc such that it views this decorator as defining a class (instead of instance) method?
Note: I want a solution in terms of pylintrc since there are multiple classes in this module that each use multiple validators; managing this warning in one place is more desirable.
I only see two classmethod related features in my current pylintrc; both only relate to the valid name(s) for the first argument.
A:
To configure pylint to interpret a decorator as defining a class method, you can add the following to your pylintrc file:
[TYPECHECK]
ignored-decorators=validator
This will tell pylint to ignore the validator decorator when checking for the first argument of a method. Note that this will not affect other checks or warnings related to the validator decorator, such as the use of cls instead of self. If you want to suppress those as well, you can add the cls-is-class option to your pylintrc file:
[TYPECHECK]
ignored-decorators=validator
cls-is-class=true
This will tell pylint to interpret the cls argument in methods decorated with validator as referring to the class itself, rather than an instance of the class.
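If those TYPECHECK options don't take effect in your pylint version, a blunt fallback that certainly exists is disabling the message itself in pylintrc, at the cost of losing the no-self-argument check everywhere:
[MESSAGES CONTROL]
disable=no-self-argument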
|
Tell pylint that a given decorator is a classmethod
|
How can I modify my pylintrc so that a given decorator is interpreted as a classmethod.
pydantic defines a validator decorator to allow for attribute validation of model classes and operates as a class method. pylint throws a
E0213: Method 'has_risk_assigned' should have "self" as first argument (no-self-argument)
for a validator method declared as:
from pydantic import BaseModel, validator
class RiskyRecord(BaseModel):
# ... attributes ...
@validator('risk')
def has_risk_assigned(cls, v):
# ... make sure that risk is properly assigned ...
How can I configure my pylintrc such that it views this decorator as defining a class (instead of instance) method?
Note: I want a solution in terms of pylintrc since there are multiple classes in this module that each use multiple validators; managing this warning in one place is more desirable.
I only see two classmethod related features in my current pylintrc; both only relate to the valid name(s) for the first argument.
|
[
"To configure pylint to interpret a decorator as defining a class method, you can add the following to your pylintrc file:\n[TYPECHECK]\nignored-decorators=validator\n\nThis will tell pylint to ignore the validator decorator when checking for the first argument of a method. Note that this will not affect other checks or warnings related to the validator decorator, such as the use of cls instead of self. If you want to suppress those as well, you can add the cls-is-class option to your pylintrc file:\n[TYPECHECK]\nignored-decorators=validator\ncls-is-class=true\n\nThis will tell pylint to interpret the cls argument in methods decorated with validator as referring to the class itself, rather than an instance of the class.\n"
] |
[
1
] |
[] |
[] |
[
"class_method",
"pylint",
"pylintrc",
"python"
] |
stackoverflow_0074661891_class_method_pylint_pylintrc_python.txt
|
Q:
VSCode did not recognize my Installed Python Packages
VSCode on Windows with Python, with the Python extension by Don installed. Not sure it makes any difference, but I thought of giving my environment.
I am using VSCode for Python, and in that process I installed the metapy package.
I was able to run metapy inside the terminal window in VSCode, but not in the editor.
PS C:\Users\xxx> python --version
Python 3.6.2 :: Anaconda custom (64-bit)
PS C:\Users\xxx> pip --version
pip 9.0.1 from C:\Program Files\Anaconda3\lib\site-packages (python 3.6)
PS C:\Users\xxx> pip install metapy
Requirement already satisfied: metapy in c:\program files\anaconda3\lib\site-packages
PS C:\Users\xxx> python
Python 3.6.2 |Anaconda custom (64-bit)| (default, Jul 20 2017, 12:30:02) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import metapy
>>> metapy.log_to_stderr()
From the terminal window it works fine, but the metapy package is not recognized from the editor. Do I have to set something for the editor to recognize my packages?
I tried to set the Python path:
"python.pythonPath": "C:\Program Files\Python36\python.exe"
A:
Sometimes VS Code does not recognize some libraries for me too. It happens to me because I have multiple Python versions installed. To solve this problem I just swap to the version that has the package.
You can change versions like that:
In your bottom right corner, you will have {} Python <Version> - click on it.
Just choose the right version for you.
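Alternatively, for older versions of the Python extension that still honor python.pythonPath (as in the question), you can pin the interpreter in settings.json; the Anaconda path below is inferred from the question's pip output, so adjust it if yours differs:
{
    "python.pythonPath": "C:\\Program Files\\Anaconda3\\python.exe"
}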
|
VSCode did not recognize my Installed Python Packages
|
VSCode on Windows with Python, with the Python extension by Don installed. Not sure it makes any difference, but I thought of giving my environment.
I am using VSCode for Python, and in that process I installed the metapy package.
I was able to run metapy inside the terminal window in VSCode, but not in the editor.
PS C:\Users\xxx> python --version
Python 3.6.2 :: Anaconda custom (64-bit)
PS C:\Users\xxx> pip --version
pip 9.0.1 from C:\Program Files\Anaconda3\lib\site-packages (python 3.6)
PS C:\Users\xxx> pip install metapy
Requirement already satisfied: metapy in c:\program files\anaconda3\lib\site-packages
PS C:\Users\xxx> python
Python 3.6.2 |Anaconda custom (64-bit)| (default, Jul 20 2017, 12:30:02) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import metapy
>>> metapy.log_to_stderr()
From the terminal window it works fine, but the metapy package is not recognized from the editor. Do I have to set something for the editor to recognize my packages?
I tried to set the Python path:
"python.pythonPath": "C:\Program Files\Python36\python.exe"
|
[
"Sometimes VS code does not recognize some libraries for me too. It's happening to me because I have multiple Python versions installed. To solve this problem I just swap to the version that accepts it.\nYou can change versions like that:\n\nIn your bottom right corner, you will have {} Python <Version> - click on it.\n\nJust choose the right version for you.\n\n\n"
] |
[
0
] |
[] |
[] |
[
"python_3.x",
"visual_studio_code"
] |
stackoverflow_0046131620_python_3.x_visual_studio_code.txt
|
Q:
always 301 Moved Permanently
I am an absolute beginner at internet programming. I tried to do some HTTP requests manually. I connected to YouTube with telnet and tried to get the HTML file of the main page:
ivan@LAPTOP-JSSQ9B0M:/mnt/d/PROJECTS$ telnet www.youtube.com 80
Trying 173.194.222.198...
Connected to wide-youtube.l.google.com.
Escape character is '^]'.
GET / HTTP/1.1
Host: www.youtube.com
But for some reason the response is:
HTTP/1.1 301 Moved Permanently
Content-Type: application/binary
X-Content-Type-Options: nosniff
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: Mon, 01 Jan 1990 00:00:00 GMT
Date: Thu, 01 Dec 2022 18:13:19 GMT
Location: https://www.youtube.com/
Server: ESF
Content-Length: 0
X-XSS-Protection: 0
X-Frame-Options: SAMEORIGIN
As far as I know, a 301 means that the resource I am trying to get has been moved to another place, and the new address is placed in the response's "Location" header. But as you can see, the address I am trying to connect to and the one in the Location header are the same. When I try to do it with other sites (for example github.com, www.youtube.com, www.twitch.tv), I get the same thing - 301 Moved Permanently, with the address in the Location header the same as mine.
Question: Why does this happen and how do I fix it?
A:
Good on you for being curious and trying stuff on your own!
What you are doing with "telnet" isn't different at all than what your browser is doing when connecting to youtube.com. (well, there is HTTP 2/3 and QUIC, but that's a topic for another day).
It's establishing a raw TCP Connection to port 80 on youtube, and sending an HTTP request.
Now, you can "type it in" since HTTP is a plain-text protocol.
(Those are pretty rare and are getting even more scarce)
Anyway,
Youtube runs an HTTP Server on port 80, the default port for HTTP, (i.e. - when browsers go to youtube.com they set up a connection to port 80 and then try sending an HTTP GET Request) serving plain HTTP.
However!
HTTP is unencrypted, (note! it has nothing to do with it being a textual protocol),
so, regardless of your request parameters (uri, user-agent, etc),
that HTTP Server chooses (well, "was configured") to always redirect you to https://www.youtube.com, via the location header. (or in twitch's case to https://www.twitch.tv)
but the address in location header and mine are the same
notice the https?
Location: https://www.youtube.com/
that tells your browser to go to that website instead.
Via HTTPS - which is HTTP over TLS.
And browsers know that the default port for HTTPS is 443.
You "telnet" client isn't actually an http-client, and it doesn't automatically react to that "instruction".
But let's say you, the "client" in this case, do honor that redirection.
That "over TLS" part? that's tricky. you can't "type it in" yourself with plain-text letters. it's a pretty complex process, and "binary" (i.e. not in English) almost in it's entirety, depending on the version of TLS.
So you can't "fix it", it's a security feature -
their (youtube's and twitch's) policy is (rightfully so) - Never serve content over HTTP,
so that bad-guys who may be "snooping" in the middle can't observe / modify the requests / responses.
You can try with other websites that don't behave like that
for example, example.com
if you want to "programmatically" connect to HTTPS servers, you can do that with any cli-http client, like wget, curl, or invoke-webrequest with windows powershell,
or, with almost any programming language -
like with the requests module in python,
the "fetch" api in JS,
the Java HttpClient,
and so on.
or, if you're feeling particularly cool-
use a TLS wrapper and send the HTTP request over that.
Does that make sense?
feel free to drop any further questions below!
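To see the redirect (and the real response after it) programmatically, here's a small sketch with Python's requests library:
import requests
# Without following redirects: shows the 301 and its Location header
r = requests.get("http://www.youtube.com/", allow_redirects=False)
print(r.status_code, r.headers.get("Location"))
# requests performs the TLS handshake for you when it follows to https://
r = requests.get("http://www.youtube.com/")
print(r.status_code, r.url)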
|
always 301 Moved Permanently
|
I am an absolute beginner at internet programming. I tried to do some HTTP requests manually. I connected to YouTube with telnet and tried to get the HTML file of the main page:
ivan@LAPTOP-JSSQ9B0M:/mnt/d/PROJECTS$ telnet www.youtube.com 80
Trying 173.194.222.198...
Connected to wide-youtube.l.google.com.
Escape character is '^]'.
GET / HTTP/1.1
Host: www.youtube.com
But for some reason the response is:
HTTP/1.1 301 Moved Permanently
Content-Type: application/binary
X-Content-Type-Options: nosniff
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: Mon, 01 Jan 1990 00:00:00 GMT
Date: Thu, 01 Dec 2022 18:13:19 GMT
Location: https://www.youtube.com/
Server: ESF
Content-Length: 0
X-XSS-Protection: 0
X-Frame-Options: SAMEORIGIN
As far as I know, a 301 means that the resource I am trying to get has been moved to another place, and the new address is placed in the response's "Location" header. But as you can see, the address I am trying to connect to and the one in the Location header are the same. When I try to do it with other sites (for example github.com, www.youtube.com, www.twitch.tv), I get the same thing - 301 Moved Permanently, with the address in the Location header the same as mine.
Question: Why does this happen and how do I fix it?
|
[
"Good on you for being curious and trying stuff on your own!\nWhat you are doing with \"telnet\" isn't different at all than what your browser is doing when connecting to youtube.com. (well, there is HTTP 2/3 and QUIC, but that's a topic for another day).\nIt's establishing a raw TCP Connection to port 80 on youtube, and sending an HTTP request.\nNow, you can \"type it in\" since HTTP is a plain-text protocol.\n(Those are pretty rare and are getting even more scarce)\nAnyway,\nYoutube runs an HTTP Server on port 80, the default port for HTTP, (i.e. - when browsers go to youtube.com they set up a connection to port 80 and then try sending an HTTP GET Request) serving plain HTTP.\nHowever!\nHTTP is unencrypted, (note! it has nothing to do with it being a textual protocol),\nso, regardless of your request parameters (uri, user-agent, etc),\nthat HTTP Server chooses (well, \"was configured\") to always redirects you to https://www.youtube.com, via the location header. (or in twitch's case to https://www.twitch.tv)\n\nbut the address in location header and mine are the same\n\nnotice the https?\nLocation: https://www.youtube.com/\nthat tells your browser to go to that website instead.\nVia HTTPS - which is HTTP over TLS.\nAnd browsers know that the default port for HTTPS is 443.\nYou \"telnet\" client isn't actually an http-client, and it doesn't automatically react to that \"instruction\".\nBut let's say you, the \"client\" in this case, do honor that redirection.\nThat \"over TLS\" part? that's tricky. you can't \"type it in\" yourself with plain-text letters. it's a pretty complex process, and \"binary\" (i.e. not in English) almost in it's entirety, depending on the version of TLS.\nSo you can't \"fix it\", it's a security feature -\ntheir (youtube's and twitch's) policy is (rightfully so) - Never serve content over HTTP,\nso that bad-guys who may be \"snooping\" in the middle can't observe / modify the requests / responses.\nYou can try with other websites that don't behave like that\nfor example, example.com\nif you want to \"programmatically\" connect to HTTPS servers, you can do that with any cli-http client, like wget, curl, or invoke-webrequest with windows powershell,\nor, with almost any programming language -\nlike with the requests module in python,\nthe \"fetch\" api in JS,\nthe Java HttpClient,\nand so on.\nor, if you're feeling particularly cool-\nuse a TLS wrapper and send the HTTP request over that.\nDoes that make sense?\nfeel free to drop any further questions below!\n"
] |
[
1
] |
[] |
[] |
[
"http",
"networking",
"telnet"
] |
stackoverflow_0074646764_http_networking_telnet.txt
|
Q:
Excel VBA - Find column, then search found column
I'm new to Excel VBA, and after quite some time attempting to solve my issue, I am unable to create a working solution. The attached image is a mock up of an actual table I'm working with. I would like to:
#1 Define a date in the VBA to search for in the blue row (e.g. 05/12/2022)
#2 Once found, find all values of both 'Apple' and 'Pear' in that yellow column (Apple = 4 times, Pear = 1 time)
#3 Look at the Green column, and store the names for all matches for 'Apple' in one array (later to be used in a string), and all matches for 'Pear' in another array
#4 Input a comma delimited return of both arrays into a cell within the spreadsheet
Step #1 was completed successfully using the following code:
Public Sub MyVBA()
Dim c As Range
Dim colNum As Integer
Dim wkb As Excel.Workbook
Dim wks As Excel.Worksheet
Set wkb = Excel.Workbooks("MyOtherWorkbook.xlsx")
Set wks = wkb.Worksheets("SheetInWorkbook")
For Each c In wks.Range("1:1")
If c.Value = "05/12/2022" Then
colNum = c.Column
End If
Next c
End Sub
Step #2 attempt:
For Each c In wks.Columns(colNum)
If c.Value = "Apple" Then
MsgBox "Apple is " & c.Address
End If
Next c
This is one of various attempts I've made at Step #2, but each time it produces errors. Advice on how to go forward with Step #2 and #3 would be appreciated.
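For reference: the immediate problem in the Step #2 attempt is that wks.Columns(colNum) is a whole-column Range, and iterating it directly with For Each does not give you individual cells, so c.Value is not a single value. A minimal fix (sketch only; the answers below do this more completely) is to loop over its .Cells, bounded by the last used row:
Dim lastRow As Long
lastRow = wks.Cells(wks.Rows.Count, colNum).End(xlUp).Row
For Each c In wks.Columns(colNum).Resize(lastRow).Cells
    If c.Value = "Apple" Then
        MsgBox "Apple is " & c.Address
    End If
Next c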
A:
If your Excel version supports FILTER():
Sub Tester()
Dim ws As Worksheet, m, dt As Date, rng As Range, res
Dim dict As Object, el, rngNames As Range, f
Set dict = CreateObject("scripting.dictionary")
dict.CompareMode = 1 'case-insensitive
dt = DateValue("12/5/2022") 'date to be searched on
Set ws = ActiveSheet
m = Application.Match(CLng(dt), ws.Rows(1), 0)
If Not IsError(m) Then 'got a match
Set rng = ws.Range(ws.Cells(2, m), ws.Cells(Rows.Count, m).End(xlUp)) 'fruits for this date
Set rngNames = rng.EntireRow.Columns("A") 'names in ColA
f = "FILTER(" & rngNames.Address() & "," & rng.Address() & "=""<v>"")" 'prep the formula
For Each el In Array("Apple", "Pear", "Melon") 'loop over fruits to be counted
res = ws.Evaluate(Replace(f, "<v>", el))
dict(el) = res
Next el
DumpDict dict 'show results
Else
MsgBox "Date not found"
End If
End Sub
'display dictionary contents to the Immediate pane
Sub DumpDict(dict As Object)
Dim k, el, v, i
For Each k In dict
Debug.Print k
v = dict(k)
If IsError(v) Then
Debug.Print , "No names"
Else
For i = LBound(v, 1) To UBound(v, 1)
Debug.Print , v(i, 1)
Next i
End If
Next k
End Sub
A:
I've written something, but you really need to see how it's laid out in the Excel file, which you can find here (obviously download and run it; it won't work in Google Sheets).
https://drive.google.com/file/d/1lSLtyYWAMVb4QM52oItbe4yK74ezquPv/view?usp=sharing
You list the data in worksheet "data" and the search values (apples, pears) in worksheet "search"; it then writes the results to a new worksheet called output.
To run, click the big "RUN" button on the data worksheet, or run the sub.
With a little tinkering you can probably make it exactly how you want, as I didn't 100% understand your question.
Public dataWs As Worksheet
Public outputWs As Worksheet
Public searchWs As Worksheet
Function create_date(string_date)
'create a date date not a string date
Dim day, month, year As String
Dim dte As Date
day = Left(string_date, 2)
month = Mid(string_date, 4, 2)
year = Right(string_date, 2)
dte = DateSerial(Int(year), Int(month), Int(day))
create_date = dte
End Function
Function clear_outout()
'clear output worksheet
outputWs.Range("a1:z99999").Clear
End Function
Function addData(name, object)
Dim x, lr As Integer
lr = outputWs.Cells(Rows.Count, 1).End(xlUp).Row
'add columns headers if not there
If lr = 1 Then
outputWs.Cells(1, 1) = "NAME"
outputWs.Cells(1, 2) = "OBJECT"
outputWs.Cells(1, 3) = "TIMES"
End If
For x = 2 To lr + 1
If x = lr + 1 Then
'if not in list add name object
outputWs.Cells(x, 1) = name
outputWs.Cells(x, 2) = object
outputWs.Cells(x, 3) = 1
Exit For
ElseIf outputWs.Cells(x, 1) = name And outputWs.Cells(x, 2) = object Then
' if in list increment count
outputWs.Cells(x, 3) = outputWs.Cells(x, 3) + 1
Exit For
End If
Next x
End Function
Function check_search_list(search_val)
' checks to see if input value is a match with one listed in range
Dim search_lr As Integer
'this is the search last row
search_lr = searchWs.Cells(Rows.Count, 1).End(xlUp).Row
'loop each search val
For x = 2 To search_lr
If searchWs.Cells(x, 1) = search_val Then
check_search_list = True
End If
Next x
End Function
Sub check_data()
''''
''' Run the thing
''''
'set the worksheets
Set dataWs = Worksheets("data")
Set outputWs = Worksheets("output")
Set searchWs = Worksheets("search")
Dim x, y, z, lr, lc As Integer
Dim searchDate As String
Dim found_column As Boolean
'clear output sheet
clear_outout
'this gets the pos of the last filled column to the left
lc = dataWs.Cells(1, Columns.Count).End(xlToLeft).Column
'get last row
lr = dataWs.Cells(Rows.Count, 1).End(xlUp).Row
'get the date from the user
searchDate = InputBox("Whats the date in format dd/mm/yyyy")
'create flag for date found
found_column = False
'loop columns and look for dates (as proper dates and not strings)
For y = 2 To lc
'if found then add all columns - this compares the date object, not string
If create_date(dataWs.Cells(1, y)) = create_date(searchDate) Then
found_column = True
'loop each row
For x = 2 To lr
If check_search_list(dataWs.Cells(x, y)) Then
' add the data if search value found
addData dataWs.Cells(x, 1), dataWs.Cells(x, y)
End If
Next x
'end loop as column already found
Exit For
End If
Next y
'open data if found else show message
If found_column Then
outputWs.Activate
Else
MsgBox "Date not found", vbCritical
End If
End Sub
A:
See the comments inside the code for the description of how this works.
There is 1 main Sub, and two helper Functions.
If I were expanding this project I would also split parts of the main Sub into more Functions, to prevent this from getting too messy. For the sake of simplicity in this answer, I kept a lot in the main Sub.
Public Sub MyVBA()
Dim c As Range
Dim colNum As Long
Dim wkb As Excel.Workbook
Dim wks As Excel.Worksheet
Set wkb = Excel.Workbooks("MyOtherWorkbook.xlsx")
Set wks = wkb.Worksheets("SheetInWorkbook")
'Get a Date as input from the user
Dim UserDate As Date: UserDate = GetUserDate()
'Exit if the user has declined to input
If UserDate = 0 Then Exit Sub
'Search for the last filled row and column
'This can be used to trim the loops so we aren't iterating through a million empty cells
Dim LastRow As Long
LastRow = wks.Columns(1).Rows(wks.Rows.Count).End(xlUp).Row
Dim LastColumn As Long
LastColumn = wks.Rows(1).Columns(wks.Columns.Count).End(xlToLeft).Column
'For each cell in Row 1
For Each c In wks.Range("1:1").Resize(, LastColumn).Cells
'if the cell contains a date & the date matches the user input
If IsDate(c.Value) Then
If CDate(c.Value) = UserDate Then
colNum = c.Column
'if the column is found, stop searching
Exit For
End If
End If
Next c
'Exit if Column not found
If colNum = 0 Then Exit Sub
'KeyRanges is a dictionary, this is an object that holds Key & Item pairs
'There is an entry in the dictionary for each keyword
'The entry's Key is the Keyword (Apple or Pear), and the item is a Collection of worksheet ranges where that keyword was found
Dim KeyRanges As Object
Set KeyRanges = CreateObject("Scripting.Dictionary")
'List of KeyWords
Dim KeyWords() As String: KeyWords = Split("Apple,Pear", ",")
'Adding an entry to the dictionary for each keyword
Dim KeyWord As Variant
For Each KeyWord In KeyWords
KeyRanges.Add KeyWord, New Collection
Next
'search the column for matches
For Each c In wks.Columns(colNum).Resize(LastRow).Cells
'compare the cell value to each keyword
For Each KeyWord In KeyWords
If c.Value = KeyWord Then
'If the cell value matches one of the keywords
'Go into the dictionary entry for that keyword
'and save the cell from this row, in column A, into the collection
KeyRanges(KeyWord).Add c.EntireRow.Cells(1)
End If
Next
Next c
'From your example for 05/12/2022
'KeyRanges now contains 2 entries
'KeyRanges("Apple") contains a Collection
'The Collection contains 4 items
'Range("A2")
'Range("A5")
'Range("A7")
'Range("A8")
'KeyRanges("Pear") contains a Collection
'The Collection contains 1 item
'Range("A4")
'Concatenate into CSV
'CSVs is an array to contain the CSV for each KeyWord
Dim CSVs() As String
ReDim CSVs(UBound(KeyWords))
'For each KeyWord
Dim i As Long
For i = 0 To UBound(KeyWords)
'Take the collection from each entry in KeyRanges
'Give it to a function which can turn collections into CSVs
CSVs(i) = JoinCollection(KeyRanges(KeyWords(i)))
Next
'Join all the CSVs into a single CSV & Output to Worksheet
Range("A1").Value = Join(CSVs, ",")
End Sub
Function GetUserDate() As Date
'Get Date From User
Dim UserInput As String
Do
UserInput = Application.InputBox(Prompt:="Date:", Default:=Date, Type:=2)
If UserInput = "" Then
'User declined to input
Exit Function
ElseIf Not IsDate(UserInput) Then
'User input not valid
UserInput = ""
MsgBox "Please enter a valid date.", vbOKOnly, "Error"
End If
Loop While UserInput = ""
GetUserDate = CDate(UserInput)
End Function
Function JoinCollection(Col As Collection, Optional Delimiter As String = ",") As String
If Col.Count = 0 Then Exit Function
Dim ReturnString As String
ReturnString = Col(1)
If Col.Count > 1 Then
Dim i As Long
For i = 2 To Col.Count
ReturnString = ReturnString & Delimiter & Col(i)
Next
End If
JoinCollection = ReturnString
End Function
I purposefully wrote the code with the idea that "Apple" & "Pear" are not the real keywords you're searching for, and the list of keywords may be much larger than 2. The dictionary object can contain several thousand entries with no issue or slowdowns.
In my example code, you can easily change and expand the list of keywords by changing a single line : KeyWords = Split("Apple,Pear", ",").
All of the code that follows that line scales off of the size of the KeyWords array, and will not need editing if the array changes size.
|
Excel VBA - Find column, then search found column
|
I'm new to Excel VBA, and after quite some time attempting to solve my issue, I am unable to create a working solution. The attached image is a mock up of an actual table I'm working with. I would like to:
#1 Define a date in the VBA to search for in the blue row (e.g. 05/12/2022)
#2 Once found, find all values of both 'Apple' and 'Pear' in that yellow column (Apple = 4 times, Pear = 1 time)
#3 Look at the Green column, and store the names for all matches for 'Apple' in one array (later to be used in a string), and all matches for 'Pear' in another array
#4 Input a comma delimited return of both arrays into a cell within the spreadsheet
Step #1 was completed successfully using the following code:
Public Sub MyVBA()
Dim c As Range
Dim colNum As Integer
Dim wkb As Excel.Workbook
Dim wks As Excel.Worksheet
Set wkb = Excel.Workbooks("MyOtherWorkbook.xlsx")
Set wks = wkb.Worksheets("SheetInWorkbook")
For Each c In wks.Range("1:1")
If c.Value = "05/12/2022" Then
colNum = c.Column
End If
Next c
End Sub
Step #2 attempt:
For Each c In wks.Columns(colNum)
If c.Value = "Apple" Then
MsgBox "Apple is " & c.Address
End If
Next c
This is one of various attempts I've made at Step #2, but each time it produces errors. Advice on how to go forward with Step #2 and #3 would be appreciated.
|
[
"If your Excel version supports FILTER():\nSub Tester()\n Dim ws As Worksheet, m, dt As Date, rng As Range, res\n Dim dict As Object, el, rngNames As Range, f\n \n Set dict = CreateObject(\"scripting.dictionary\")\n dict.CompareMode = 1 'case-insensitive\n \n dt = DateValue(\"12/5/2022\") 'date to be searched on\n \n Set ws = ActiveSheet\n \n m = Application.Match(CLng(dt), ws.Rows(1), 0)\n If Not IsError(m) Then 'got a match\n Set rng = ws.Range(ws.Cells(2, m), ws.Cells(Rows.Count, m).End(xlUp)) 'fruits for this date\n Set rngNames = rng.EntireRow.Columns(\"A\") 'names in ColA\n f = \"FILTER(\" & rngNames.Address() & \",\" & rng.Address() & \"=\"\"<v>\"\")\" 'prep the formula\n For Each el In Array(\"Apple\", \"Pear\", \"Melon\") 'loop over fruits to be counted\n res = ws.Evaluate(Replace(f, \"<v>\", el))\n dict(el) = res\n Next el\n DumpDict dict 'show results\n Else\n MsgBox \"Date not found\"\n End If\nEnd Sub\n\n'display dictionary contents to the Immediate pane\nSub DumpDict(dict As Object)\n Dim k, el, v, i\n For Each k In dict\n Debug.Print k\n v = dict(k)\n If IsError(v) Then\n Debug.Print , \"No names\"\n Else\n For i = LBound(v, 1) To UBound(v, 1)\n Debug.Print , v(i, 1)\n Next i\n End If\n Next k\nEnd Sub\n\n",
"I've wrote something but you really need to see how its laid out on the excel file which you can find here (obviously download and run, it wont work in google sheets).\nhttps://drive.google.com/file/d/1lSLtyYWAMVb4QM52oItbe4yK74ezquPv/view?usp=sharing\nYou list the data in worksheet \"data\", the search values (apples, pears) in worksheet \"search\" - then it writes them to a new worksheet called output\nTo run, click the big \"RUN\" button on the data worksheet, or run the sub.\nWith a little tinkering you can probably make it exactly how you want, as I didn't 100% understand your question.\nPublic dataWs As Worksheet\nPublic outputWs As Worksheet\nPublic searchWs As Worksheet\n\n\nFunction create_date(string_date)\n 'create a date date not a string date\n Dim day, month, year As String\n Dim dte As Date\n \n day = Left(string_date, 2)\n month = Mid(string_date, 4, 2)\n year = Right(string_date, 2)\n dte = DateSerial(Int(year), Int(month), Int(day))\n create_date = dte\n \nEnd Function\n\nFunction clear_outout()\n\n 'clear output worksheet\n\n outputWs.Range(\"a1:z99999\").Clear\n\n\nEnd Function\n\nFunction addData(name, object)\n\n\n Dim x, lr As Integer\n \n lr = outputWs.Cells(Rows.Count, 1).End(xlUp).Row\n \n 'add columns headers if not there\n If lr = 1 Then\n outputWs.Cells(1, 1) = \"NAME\"\n outputWs.Cells(1, 2) = \"OBJECT\"\n outputWs.Cells(1, 3) = \"TIMES\"\n End If\n \n For x = 2 To lr + 1\n If x = lr + 1 Then\n 'if not in list add name object\n outputWs.Cells(x, 1) = name\n outputWs.Cells(x, 2) = object\n outputWs.Cells(x, 3) = 1\n Exit For\n ElseIf outputWs.Cells(x, 1) = name And outputWs.Cells(x, 2) = object Then\n ' if in list increment count\n outputWs.Cells(x, 3) = outputWs.Cells(x, 3) + 1\n Exit For\n End If\n Next x\n\nEnd Function\n\nFunction check_search_list(search_val)\n\n ' checks to see if input value is a match with one listed in range \n\n Dim search_lr As Integer\n \n 'this is the search last row\n search_lr = searchWs.Cells(Rows.Count, 1).End(xlUp).Row\n 'loop each search val\n For x = 2 To search_lr\n If searchWs.Cells(x, 1) = search_val Then\n check_search_list = True\n End If\n Next x\n \nEnd Function\n\n\nSub check_data()\n ''''\n ''' Run the thing\n ''''\n\n 'set the worksheets\n Set dataWs = Worksheets(\"data\")\n Set outputWs = Worksheets(\"output\")\n Set searchWs = Worksheets(\"search\")\n \n Dim x, y, z, lr, lc As Integer\n Dim searchDate As String\n Dim found_column As Boolean\n \n \n 'clear output sheet\n clear_outout\n 'this gets the pos of the last filled column to the left\n lc = dataWs.Cells(1, Columns.Count).End(xlToLeft).Column\n 'get last row\n lr = dataWs.Cells(Rows.Count, 1).End(xlUp).Row\n 'get the date from the user\n searchDate = InputBox(\"Whats the date in format dd/mm/yyyy\")\n 'create flag for date found\n found_column = False\n 'loop columns and look for dates (as proper dates and not strings)\n For y = 2 To lc\n 'if found then add all columns - this compares the date object, not strin g\n If create_date(dataWs.Cells(1, y)) = create_date(searchDate) Then\n found_column = True\n 'loop eaach row\n For x = 2 To lr\n If check_search_list(dataWs.Cells(x, y)) Then\n ' add the data if search value found\n addData dataWs.Cells(x, 1), dataWs.Cells(x, y)\n End If\n Next x\n 'end loop as column already found\n Exit For\n End If\n Next y\n \n 'open data if found else show message\n If found_column Then\n outputWs.Activate\n Else\n MsgBox \"Date not found\", vbCritical\n End If\n \nEnd Sub\n\n",
"See the comments inside the code for the description of how this works.\nThere is 1 main Sub, and two helper Functions.\nIf I were expanding this project I would also split parts of the main Sub into more Functions, to prevent this from getting too messy. For the sake of simplicity in this answer, I kept a lot in the main Sub.\nPublic Sub MyVBA()\n\n Dim c As Range\n Dim colNum As Long\n Dim wkb As Excel.Workbook\n Dim wks As Excel.Worksheet\n\n Set wkb = Excel.Workbooks(\"MyOtherWorkbook.xlsx\")\n Set wks = wkb.Worksheets(\"SheetInWorkbook\")\n \n 'Get a Date as input from the user\n Dim UserDate As Date: UserDate = GetUserDate()\n 'Exit if the user has declined to input\n If UserDate = 0 Then Exit Sub\n \n 'Search for the last filled row and column\n 'This can be used to trim the loops so we aren't iterating through a million empty cells\n Dim LastRow As Long\n LastRow = wks.Columns(1).Rows(wks.Rows.Count).End(xlUp).Row\n \n Dim LastColumn As Long\n LastColumn = wks.Rows(1).Columns(wks.Columns.Count).End(xlToLeft).Column\n \n 'For each cell in Row 1\n For Each c In wks.Range(\"1:1\").Resize(, LastColumn).Cells\n 'if the cell contains a date & the date matches the user input\n If IsDate(c.Value) Then\n If CDate(c.Value) = UserDate Then\n colNum = c.Column\n 'if the column is found, stop searching\n Exit For\n End If\n End If\n Next c\n 'Exit if Column not found\n If colNum = 0 Then Exit Sub\n \n 'KeyRanges is a dictionary, this is an object that holds Key & Item pairs\n 'There is an entry in the dictionary for each keyword\n 'The entry's Key is the Keyword (Apple or Pear), and the item is a Collection of worksheet ranges where that keyword was found\n Dim KeyRanges As Object\n Set KeyRanges = CreateObject(\"Scripting.Dictionary\")\n \n 'List of KeyWords\n Dim KeyWords() As String: KeyWords = Split(\"Apple,Pear\", \",\")\n \n 'Adding an entry to the dictionary for each keyword\n Dim KeyWord As Variant\n For Each KeyWord In KeyWords\n KeyRanges.Add KeyWord, New Collection\n Next\n \n 'search the column for matches\n For Each c In wks.Columns(colNum).Resize(LastRow).Cells\n 'compare the cell value to each keyword\n For Each KeyWord In KeyWords\n If c.Value = KeyWord Then\n 'If the cell value matches one of the keywords\n 'Go into the dictionary entry for that keyword\n 'and save the cell from this row, in column A, into the collection\n KeyRanges(KeyWord).Add c.EntireRow.Cells(1)\n End If\n Next\n Next c\n \n 'From your example for 05/12/2022\n 'KeyRanges now contains 2 entries\n 'KeyRanges(\"Apple\") contains a Collection\n 'The Collection contains 4 items\n 'Range(\"A2\")\n 'Range(\"A5\")\n 'Range(\"A7\")\n 'Range(\"A8\")\n 'KeyRanges(\"Pear\") contains a Collection\n 'The Collection contains 1 item\n 'Range(\"A4\")\n \n 'Concatenate into CSV\n 'CSVs is an array to contain the CSV for each KeyWord\n Dim CSVs() As String\n ReDim CSVs(UBound(KeyWords))\n \n 'For each KeyWord\n Dim i As Long\n For i = 0 To UBound(KeyWords)\n 'Take the collection from each entry in KeyRanges\n 'Give it to a function which can turn collections into CSVs\n CSVs(i) = JoinCollection(KeyRanges(KeyWords(i)))\n Next\n \n 'Join all the CSVs into a single CSV & Output to Worksheet\n Range(\"A1\").Value = Join(CSVs, \",\")\n \nEnd Sub\nFunction GetUserDate() As Date\n 'Get Date From User\n Dim UserInput As String\n Do\n UserInput = Application.InputBox(Prompt:=\"Date:\", Default:=Date, Type:=2)\n If UserInput = \"\" Then\n 'User declined to input\n Exit Function\n ElseIf Not IsDate(UserInput) Then\n 'User input not 
valid\n UserInput = \"\"\n MsgBox \"Please enter a valid date.\", vbOKOnly, \"Error\"\n End If\n Loop While UserInput = \"\"\n \n GetUserDate = CDate(UserInput)\nEnd Function\nFunction JoinCollection(Col As Collection, Optional Delimiter As String = \",\") As String\n If Col.Count = 0 Then Exit Function\n Dim ReturnString As String\n ReturnString = Col(1)\n If Col.Count > 1 Then\n Dim i As Long\n For i = 2 To Col.Count\n ReturnString = ReturnString & Delimiter & Col(i)\n Next\n End If\n JoinCollection = ReturnString\nEnd Function\n\nI purposefully wrote the code with the idea that \"Apple\" & \"Pear\" are not the real keywords you're searching for, and the list of keywords may be much larger than 2. The dictionary object can contain several thousand entries with no issue or slowdowns.\nIn my example code, you can easily change and expand the list of keywords by changing a single line : KeyWords = Split(\"Apple,Pear\", \",\").\nAll of the code that follows that line scales off of the size of the KeyWords array, and will not need editing if the array changes size.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"excel",
"vba"
] |
stackoverflow_0074659786_excel_vba.txt
|
Q:
I lost the down arrow icon for the "Next Change" in my VSCode toolbar -- how do I get it back?
My git diff window in VSCode used to show an up-arrow icon (Previous Change) and a down-arrow icon (Next Change) in the toolbar.
I somehow recently lost just the down-arrow, and I can't figure out how to get it back. The "Next Change" action now instead shows up only in the "More Actions..." menu (see screenshot).
Any suggestions how to fix this? I have scoured the VSCode settings but have not been able to find anything related.
This screenshot below shows where the arrow used to be:
A:
It was weirdly hard to figure this out, but the answer turned out to be embarrassingly simple. Right-clicking on any icon in the toolbar displays a list of icons to show or hide.
|
I lost the down arrow icon for the "Next Change" in my VSCode toolbar -- how do I get it back?
|
My git diff window in VSCode used to show an up-arrow icon (Previous Change) and a down-arrow icon (Next Change) in the toolbar.
I somehow recently lost just the down-arrow, and I can't figure out how to get it back. The "Next Change" action now instead shows up only in the "More Actions..." menu (see screenshot).
Any suggestions how to fix this? I have scoured the VSCode settings but have not been able to find anything related.
This screenshot below shows where the arrow used to be:
|
[
"It was weirdly hard to figure this out, but the answer turned out to be embarrassingly simple. Right-clicking on any icon in the toolbar displays a list of icons to show or hide.\n\n"
] |
[
0
] |
[] |
[] |
[
"git",
"visual_studio_code"
] |
stackoverflow_0074553032_git_visual_studio_code.txt
|
Q:
How can I stretch an iframe to cover its parent element without CSS?
I want a JavaScript or jQuery solution to stretch or shrink the child element to cover the container using px width and height.
<div style="width:300px; height:713px;"><iframe ratio="4:1 || 4:3 || 16:9 || ..." ></frame></div>
<div style="width:713px; height:300px;"><iframe ratio="4:1 || 4:3 || 16:9 || ..." ></frame></div>
A:
Finally figured it out:
w and h can be just the ratios 16:9, 4:3, etc., or the actual size of the current video
cw is the container width and ch is the container height
you can fill the container with "fill" or fit the video into the container with "fit"
function vidratios(w, h, cw, ch, tp='fill') {
var ratw = cw / w;
var rath = ch / h;
if(tp == 'fit'){
var ratio = ratw < rath ? ratw : rath;
} else {
var ratio = ratw > rath ? ratw : rath;
}
var nw = w * ratio;
var nh = h * ratio;
var retarr = {
'cw': nw,
'ch': nh,
'rat': ratio
};
return retarr;
}
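For completeness, here is a minimal sketch of applying the result to an iframe with jQuery — the selector, the 16:9 source size, and the 300x713 container are illustrative assumptions, not part of the original function:
// size an iframe to fill its container, assuming a 16:9 source
var size = vidratios(16, 9, 300, 713, 'fill');
$('#myIframe').css({ width: size.cw + 'px', height: size.ch + 'px' });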
|
How can I stretch an iframe to cover its parent element without CSS?
|
I want a JavaScript or jQuery solution to stretch or shrink the child element to cover the container using px width and height.
<div style="width:300px; height:713px;"><iframe ratio="4:1 || 4:3 || 16:9 || ..." ></frame></div>
<div style="width:713px; height:300px;"><iframe ratio="4:1 || 4:3 || 16:9 || ..." ></frame></div>
|
[
"finally figured it out:\nw and h can be just the ratios 16:9, 4:3, etc or the actual size of the curent video\ncw is the container width and ch is the container height\nyou can fill the container with fill or fit the video into the container with \"fit\"\nfunction vidratios(w, h, cw, ch, tp='fill') {\n var ratw = cw / w;\n var rath = ch / h;\n if(tp == 'fit'){\n var ratio = ratw < rath ? ratw : rath;\n } else {\n var ratio = ratw > rath ? ratw : rath;\n }\n var nw = w * ratio;\n var nh = h * ratio;\n var retarr = {\n 'cw': nw,\n 'ch': nh,\n 'rat': ratio\n };\n\n return retarr;\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"javascript",
"jquery"
] |
stackoverflow_0074657012_javascript_jquery.txt
|
Q:
SSRS update a record
So we have an SSRS report that displays some information about a product, but we would like to update said product. The example I was given goes as follows:
Name
Location
Price
Action
Orange
Earth
$100
Update
After clicking the Update word ... the price value changes.
Name
Location
Price
Action
Orange
Earth
$150
Update
I'm hoping I can link the word "Update" to a stored proc that we have that will do all the magic.
Thanks
A:
You can do this but it is not elegant.
I'm assuming you need some way of providing the new price, so that could be a parameter.
Let's take an example where you provide a price increase or decrease as a simple parameter in your report which is going to be passed to a stored proc along with the item selected in order to do the update.
The first step is to create a report which looks like the example you provided. Add a parameter to it called, say, pPriceAdjust. Allow blank values and set blank by default. We need to do this so the report runs initially without a parameter value being set.
Once that report looks OK, leave it to one side for now, we'll come back to that later.
Now create a new report, let's call it _sub_PriceAdjust. Add 4 parameters called pName, pLocation, pPrice, pAdjustment .
Next, add a dataset query that looks something like this:
UPDATE myTable SET Price = @pPrice + @pAdjustment
WHERE [Name] = @pName and [Location] = @pLocation
SELECT CONCAT(@pName, ' in location ', @pLocation, ' was updated from ', FORMAT(@pPrice, 'c2'), ' to ', FORMAT(@pPrice + @pAdjustment, 'c2')) as ReturnText
You could (and probably should) create a stored proc to do this but for the sake of simplicity it can just go directly in the dataset query.
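For reference, a stored proc version of that query might look something like this — a sketch only, reusing the table and parameter names from above:
CREATE PROCEDURE dbo.usp_AdjustPrice
    @pName varchar(100), @pLocation varchar(100), @pPrice money, @pAdjustment money
AS
BEGIN
    -- apply the adjustment to the matching record
    UPDATE myTable SET Price = @pPrice + @pAdjustment
     WHERE [Name] = @pName AND [Location] = @pLocation;

    -- return a confirmation message for the subreport to display
    SELECT CONCAT(@pName, ' in location ', @pLocation, ' was updated from ',
                  FORMAT(@pPrice, 'c2'), ' to ', FORMAT(@pPrice + @pAdjustment, 'c2')) AS ReturnText;
END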
All we have done is updated the record and then return a message as the dataset query result which can be displayed in the sub report.
Now, add a textbox to your subreport and set it to be the ReturnText field from the dataset, probably something like =FIRST(Fields!ReturnText.Value, "myDataSetName"). Make sure the text box is big enough to fit the whole message in.
Finally, for this subreport, add another textbox, maybe format it to look like a button, and set its action to Go To Report with your original report as the target. This will allow the user to click a button to get back to the original report (although the report toolbar's back button might be better).
Nearly there....
Go back to the original report and, in your "Update" textbox, go to properties and set the Action to Go To Report. Choose the _sub_PriceAdjust report and then add each of the 4 parameters and set their values: the first three will be the Field values from the main dataset, which you should be able to choose from a drop-down; the final parameter (pAdjustment) will be the pPriceAdjust parameter we set up right at the start of this. There is no reason the pPriceAdjust parameter could not be called pAdjustment, but I named them differently so you could see how each interacted.
Anyway, that should do it. Not pretty, but it should work. There is some obvious error checking to add (is the adjustment value zero or blank, for example), but I'll leave that bit to you.
To replicate your example, run the report, type 50 into the parameter and click "update" on the selected line. This should add 50 to the selected price. To decrease the price back to the original amount, change the parameter to -50 and hit update.
This is completely from memory so might not be perfect but if there is something you can't figure out leave a comment and I'll refine the answer.
|
SSRS update a record
|
So we have an SSRS report that displays some information about a product, but we would like to update said product. The example I was given goes as follows:
Name
Location
Price
Action
Orange
Earth
$100
Update
After clicking the Update word ... the price value changes.
Name
Location
Price
Action
Orange
Earth
$150
Update
I'm hoping I can link the word "Update" to a stored proc that we have that will do all the magic.
Thanks
|
[
"You can do this but it is not elegant.\nI'm assuming you need some way of providing the new price, so that could be a parameter.\nLet's take an example where you provide a price increase or decrease as a simple parameter in your report which is going to be passed to a stored proc along with the item selected in order to do the update.\nFirst step is to create a report which looks like the example you provided. Add a parameter to this called say, pPriceAdjust. Allow blank values and set blank by default. We need to do this so the report runs initially without a parameter value being set.\nOnce that report looks OK, leave it to one side for now, we'll come back to that later.\nNow create a new report, let's call it _sub_PriceAdjust. Add 4 parameters called pName, pLocation, pPrice, pAdjustment .\nNext add a dataset query that look something like this..\nUPDATE myTable SET Price = @pPrice + @pAdjustment\n WHERE [Name] = @pName and [Location] = @pLocation\n\nSELECT CONCAT(@pName, ' in location ', @pLocation, ' was updated from ', FORMAT(@pPrice, 'c2'), ' to ', FORMAT(@pPrice + @pAdjustment, 'c2')) as ReturnText\n\nYou could (and probably should) create a stored proc to do this but for the sake of simplicity it can just go directly in the dataset query.\nAll we have done is updated the record and then return a message as the dataset query result which can be displayed in the sub report.\nNow, add a textbox to your subreport and set it to be the ReturnText field from the dataset. probably something like =FIRST(Fields!ReturnText.Value, \"myDataSetName\") . Make sur ethe text box is big enough to fit the whole message in.\nFinally, for this subreport, add another textbox, maybe format it to look like a button, and set it action to Go To Report and choose your original report as the target, this will allow the user to click a button to get back to the original report (although the report toolbar back button might be better)\nNearly there....\nGo back to the original report and in your \"Update\" textbox, go to properties and set the Action to Go To Report. Choose the _sub_PriceAdjust report and then add each of the 4 parameters and set their values, the first three will be the Field values from the main data set which you should be able to choose from a drop down, the final parameter (pAdjustment) will be the pPriceAdjust parameter we setup right at the start of this. There is no reason the pPriceAdjust parameter could not be called pAdjustment but I named them differently so you could see how each interacted.\nAnyway, that should do it. Not pretty but it should work. There is some obvious error checking to add (is the adjustment value zero or blank for example, but I'll leave that bit to you.\nTo replicate your example, run the report, type 50 into the parameter and click \"update\" on the selected line. This should add 50 to the select price. To decrease the price back to the original amount, change the parameter to -50 and hit update.\nThis is completely from memory so might not be perfect but if there is something you can't figure out leave a comment and I'll refine the answer.\n"
] |
[
0
] |
[] |
[] |
[
"reporting_services",
"ssrs_2012"
] |
stackoverflow_0074660472_reporting_services_ssrs_2012.txt
|
Q:
How can I get the Physical Path property of a site
When I just list the sites with the default formatting, it shows the physical path.
PS C:\Windows\system32> $sm = Get-IISServerManager
PS C:\Windows\system32> $sm.Sites
Name ID State Physical Path Bindings
---- -- ----- ------------- --------
Default Web Site 1 Started %SystemDrive%\inetpub\wwwroot http *:80:
Test 2 Started C:\inetpub\wwwroot_Test http *:5007:
PS C:\Windows\system32>
But I can't find any corresponding property on the object.
$sm.Sites[0] | Format-List *
ApplicationDefaults : Microsoft.Web.Administration.ApplicationDefaults
Applications : {Default Web Site/, ...}
Bindings : {http *:80:}
Id : 1
Limits : Microsoft.Web.Administration.SiteLimits
LogFile : Microsoft.Web.Administration.SiteLogFile
Name : Default Web Site
ServerAutoStart : True
State : Started
TraceFailedRequestsLogging : Microsoft.Web.Administration.SiteTraceFailedRequestsLogging
VirtualDirectoryDefaults : Microsoft.Web.Administration.VirtualDirectoryDefaults
Attributes : {name, id, serverAutoStart, state}
ChildElements : {bindings, limits, logFile, traceFailedRequestsLogging...}
ElementTagName : site
IsLocallyStored : True
Methods : {Start, Stop}
RawAttributes : {[name, Default Web Site], [id, 1], [serverAutoStart, True], [state, 1]}
Schema : Microsoft.Web.Administration.ConfigurationElementSchema
Direct Question: How can I get the physical path of a site?
Indirect Question: Is there a way to find how the object is formatted by default? Then I could look up the Physical Path from there.
A:
After decompiling and debugging this for a few hours, I found the expression PowerShell uses internally:
$_.Applications["/"].VirtualDirectories["/"].PhysicalPath
No idea how you are supposed to find that out without a decompiler.
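Regarding the indirect question: one decompiler-free place to look is PowerShell's formatting data, which is what defines the default table columns. A sketch — the type name is an assumption, and the format data is only available once the IIS module has been loaded:
# Inspect the view definitions behind the default table output
Get-FormatData -TypeName 'Microsoft.Web.Administration.Site' |
    Select-Object -ExpandProperty FormatViewDefinition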
A:
$physicalPath = (Get-Website "Default Web Site" | Select-Object).PhysicalPath
A:
I have been using
Import-Module Webadministration
gci "IIS:\Sites\{target site name}" `
| Where-Object {$_.NodeType -eq 'application'} `
| Sort-Object -Property PhysicalPath, name
A:
This worked perfectly for my needs. Just remember to define {target site name} in your script as the specific site name, in case you are running multiple sites under IIS.
Thanks @ΩmegaMan for this little gem
A:
You can get the physical path in this way:
echo (Get-WebApplication "Your application" | Select-Object).PhysicalPath
A:
2 websites on my test.
$x = Get-IISSite
$x[1].Applications.VirtualDirectories.PhysicalPath
C:\inetpub\test
$x[0].Applications.VirtualDirectories.PhysicalPath
%SystemDrive%\inetpub\wwwroot
A:
The website is the application at the root ("/") path. For example, to get (and set) the physical path of the default web site:
$manager = Get-IISServerManager
$rootsite = $manager.Sites["Default Web Site"].Applications["/"].VirtualDirectories["/"]
$rootsite.PhysicalPath = "C:\my\websites\path"
$manager.CommitChanges()
|
How can I get the Physical Path property of a site
|
When I just list the sites with the default formatting, it shows the physical path.
PS C:\Windows\system32> $sm = Get-IISServerManager
PS C:\Windows\system32> $sm.Sites
Name ID State Physical Path Bindings
---- -- ----- ------------- --------
Default Web Site 1 Started %SystemDrive%\inetpub\wwwroot http *:80:
Test 2 Started C:\inetpub\wwwroot_Test http *:5007:
PS C:\Windows\system32>
But I can't find any corresponding property on the object.
$sm.Sites[0] | Format-List *
ApplicationDefaults : Microsoft.Web.Administration.ApplicationDefaults
Applications : {Default Web Site/, ...}
Bindings : {http *:80:}
Id : 1
Limits : Microsoft.Web.Administration.SiteLimits
LogFile : Microsoft.Web.Administration.SiteLogFile
Name : Default Web Site
ServerAutoStart : True
State : Started
TraceFailedRequestsLogging : Microsoft.Web.Administration.SiteTraceFailedRequestsLogging
VirtualDirectoryDefaults : Microsoft.Web.Administration.VirtualDirectoryDefaults
Attributes : {name, id, serverAutoStart, state}
ChildElements : {bindings, limits, logFile, traceFailedRequestsLogging...}
ElementTagName : site
IsLocallyStored : True
Methods : {Start, Stop}
RawAttributes : {[name, Default Web Site], [id, 1], [serverAutoStart, True], [state, 1]}
Schema : Microsoft.Web.Administration.ConfigurationElementSchema
Direct Question: How can I get the physical path of a site?
Indirect Question: Is there a way to find how the object is formatted by default? Then I could look up the Physical Path from there.
|
[
"After decompiling and debugging this for a few hours, I found the expression powershell uses internally:\n$_.Applications[\"/\"].VirtualDirectories[\"/\"].PhysicalPath\nNo idea how you are supposed to find that out without a decompiler.\n",
"$physicalPath = (Get-Website \"Default Web Site\" | Select-Object).PhysicalPath\n\n",
"I have been using \nImport-Module Webadministration\n\ngci \"IIS:\\Sites\\{target site name}\" `\n | Where-Object {$_.NodeType -eq 'application'} `\n | Sort-Object -Property PhysicalPath, name \n\n",
"This worked perfect for my needs. Just remember to define {target site name} in your script as the specific site name, in case you are running multiple sites under IIS.\nThanks @ΩmegaMan for this little gem\n",
"You can get the physical path in this way:\necho (Get-WebApplication \"Your application\" | Select-Object).PhysicalPath\n\n",
"2 websites on my test.\n$x = Get-IISSite\n$x[1].Applications.VirtualDirectories.PhysicalPath\n\nC:\\inetpub\\test\n$x[0].Applications.VirtualDirectories.PhysicalPath\n\n%SystemDrive%\\inetpub\\wwwroot\n",
"The website is the application the root, \"/\", path. For example, to get (and set) the physical path of the default web site:\n$manager = Get-IISServerManager\n$rootsite = $manager.Sites[\"Default Web Site\"].Applications[\"/\"].VirtualDirectories[\"/\"]\n$rootsite.PhysicalPath = \"C:\\my\\websites\\path\"\n$manager.CommitChanges()\n\n"
] |
[
10,
7,
4,
1,
0,
0,
0
] |
[] |
[] |
[
"iis",
"powershell"
] |
stackoverflow_0052498299_iis_powershell.txt
|
Q:
Using zoneinfo with pandas.date_range
I am trying to use zoneinfo instead of pytz. I am running into a problem using zoneinfo to initiate dates and passing it on to pd.date_range.
Below is an example of doing the exact same thing with pytz and with zoneinfo. But while passing it to pd.date_range, I get an error with the latter.
pytz example:
start_date = datetime(2021, 1, 1, 0, 0, 0)end_date = datetime(2024, 1, 1, 0, 0, 0) # exclusive end range
pt = pytz.timezone('Canada/Pacific')start_date = pt.localize(start_date)end_date = pt.localize(end_date)
pd.date_range(start_date, end_date-timedelta(days=1), freq='d')
zoneinfo example:
start_date1 = '2021-01-01 00:00:00'
start_date1 = datetime.strptime(start_date1, '%Y-%m-%d %H:%M:%S').replace(microsecond=0, second=0, minute=0, tzinfo=ZoneInfo("America/Vancouver"))end_date1 = start_date1 + relativedelta(years=3)
pd.date_range(start_date1, end_date1-timedelta(days=1), freq='d')
Yet, when using zoneinfo I get the following error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~/Documents/GitHub/virtual/lib/python3.9/site-packages/pandas/_libs/tslibs/timezones.pyx in pandas._libs.tslibs.timezones.get_dst_info()
AttributeError: 'NoneType' object has no attribute 'total_seconds'
Exception ignored in: 'pandas._libs.tslibs.tzconversion.tz_convert_from_utc_single'
Traceback (most recent call last):
File "pandas/_libs/tslibs/timezones.pyx", line 266, in pandas._libs.tslibs.timezones.get_dst_info
AttributeError: 'NoneType' object has no attribute 'total_seconds'
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~/Documents/GitHub/virtual/lib/python3.9/site-packages/pandas/_libs/tslibs/timezones.pyx in pandas._libs.tslibs.timezones.get_dst_info()
AttributeError: 'NoneType' object has no attribute 'total_seconds'
Exception ignored in: 'pandas._libs.tslibs.tzconversion.tz_convert_from_utc_single'
Traceback (most recent call last):
File "pandas/_libs/tslibs/timezones.pyx", line 266, in pandas._libs.tslibs.timezones.get_dst_info
AttributeError: 'NoneType' object has no attribute 'total_seconds'
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/vp/7ptlp5l934vdh1lvmpgk4qyc0000gn/T/ipykernel_67190/3566591779.py in <module>
5 end_date1 = start_date1 + relativedelta(years=3)
6
----> 7 pd.date_range(start_date1, end_date1-timedelta(days=1), freq='d')
8
9 # Because certain distributions will be a result of combined distributions,
~/Documents/GitHub/virtual/lib/python3.9/site-packages/pandas/core/indexes/datetimes.py in date_range(start, end, periods, freq, tz, normalize, name, closed, **kwargs)
1095 freq = "D"
1096
-> 1097 dtarr = DatetimeArray._generate_range(
1098 start=start,
1099 end=end,
~/Documents/GitHub/virtual/lib/python3.9/site-packages/pandas/core/arrays/datetimes.py in _generate_range(cls, start, end, periods, freq, tz, normalize, ambiguous, nonexistent, closed)
450
451 if tz is not None and index.tz is None:
--> 452 arr = tzconversion.tz_localize_to_utc(
453 index.asi8, tz, ambiguous=ambiguous, nonexistent=nonexistent
454 )
~/Documents/GitHub/virtual/lib/python3.9/site-packages/pandas/_libs/tslibs/tzconversion.pyx in pandas._libs.tslibs.tzconversion.tz_localize_to_utc()
~/Documents/GitHub/virtual/lib/python3.9/site-packages/pandas/_libs/tslibs/timezones.pyx in pandas._libs.tslibs.timezones.get_dst_info()
AttributeError: 'NoneType' object has no attribute 'total_seconds'
Testing the parameters:
start_date==start_date1
and
end_date==end_date1
Both tests result in True.
A:
If I understand correctly, you want to create a date range (1D freq) using ZoneInfo. If so, I see a few things going on with your code.
#1 When dealing with datetimes, be sure the object is in the correct dtype. I believe the datetime64 format will work better.
#2 From the provided code I don't think ‘strptime’ or ‘replace’ are needed. To access "America/Vancouver" within ZoneInfo you can make it work if you parse start_date1 into years, months, days, hours and minutes.
#3 When start_date1 is parsed, you can add 3 to years (or another number) to create the end date.
The above will create a DatetimeIndex over the specified range.
Datetimes are always tricky. As always you can get to the same destination using different paths…this is just one of them.
import datetime as dt
import pandas as pd
from zoneinfo import ZoneInfo

start_date_str = '2021-01-01 00:00:00'
start_date_datetime64 = pd.to_datetime(start_date_str) # change dtype to datetime64
year = start_date_datetime64.year
month = start_date_datetime64.month
day = start_date_datetime64.day
hour = start_date_datetime64.hour
minute = start_date_datetime64.minute
start_date_formatted = dt.datetime(year, month, day, hour, minute, tzinfo=ZoneInfo("America/Vancouver"))
end_date_formatted = dt.datetime(year + 3, month, day, hour, minute, tzinfo=ZoneInfo("America/Vancouver"))
result = pd.date_range(start_date_formatted, end_date_formatted-pd.Timedelta(days=1), freq='d')
OUTPUT: DatetimeIndex([...], dtype='datetime64[ns, America/Vancouver]', length=1095, freq='D')
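If your pandas version is new enough to accept ZoneInfo objects directly (older releases raise exactly the error in the question), a shorter sketch along the same lines:
import pandas as pd
from zoneinfo import ZoneInfo

start = pd.Timestamp(2021, 1, 1, tz=ZoneInfo("America/Vancouver"))
end = start + pd.DateOffset(years=3)  # exclusive end of the range
result = pd.date_range(start, end - pd.Timedelta(days=1), freq='d')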
|
Using zoneinfo with pandas.date_range
|
I am trying to use zoneinfo instead of pytz. I am running into a problem using zoneinfo to initiate dates and passing it on to pd.date_range.
Below is an example of doing the exact same thing with pytz and with zoneinfo. But while passing it to pd.date_range, I get an error with the latter.
pytz example:
start_date = datetime(2021, 1, 1, 0, 0, 0)end_date = datetime(2024, 1, 1, 0, 0, 0) # exclusive end range
pt = pytz.timezone('Canada/Pacific')start_date = pt.localize(start_date)end_date = pt.localize(end_date)
pd.date_range(start_date, end_date-timedelta(days=1), freq='d')
zoneinfo example:
start_date1 = '2021-01-01 00:00:00'
start_date1 = datetime.strptime(start_date1, '%Y-%m-%d %H:%M:%S').replace(microsecond=0, second=0, minute=0, tzinfo=ZoneInfo("America/Vancouver"))end_date1 = start_date1 + relativedelta(years=3)
pd.date_range(start_date1, end_date1-timedelta(days=1), freq='d')
Yet, when using zoneinfo I get the following error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~/Documents/GitHub/virtual/lib/python3.9/site-packages/pandas/_libs/tslibs/timezones.pyx in pandas._libs.tslibs.timezones.get_dst_info()
AttributeError: 'NoneType' object has no attribute 'total_seconds'
Exception ignored in: 'pandas._libs.tslibs.tzconversion.tz_convert_from_utc_single'
Traceback (most recent call last):
File "pandas/_libs/tslibs/timezones.pyx", line 266, in pandas._libs.tslibs.timezones.get_dst_info
AttributeError: 'NoneType' object has no attribute 'total_seconds'
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~/Documents/GitHub/virtual/lib/python3.9/site-packages/pandas/_libs/tslibs/timezones.pyx in pandas._libs.tslibs.timezones.get_dst_info()
AttributeError: 'NoneType' object has no attribute 'total_seconds'
Exception ignored in: 'pandas._libs.tslibs.tzconversion.tz_convert_from_utc_single'
Traceback (most recent call last):
File "pandas/_libs/tslibs/timezones.pyx", line 266, in pandas._libs.tslibs.timezones.get_dst_info
AttributeError: 'NoneType' object has no attribute 'total_seconds'
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/vp/7ptlp5l934vdh1lvmpgk4qyc0000gn/T/ipykernel_67190/3566591779.py in <module>
5 end_date1 = start_date1 + relativedelta(years=3)
6
----> 7 pd.date_range(start_date1, end_date1-timedelta(days=1), freq='d')
8
9 # Because certain distributions will be a result of combined distributions,
~/Documents/GitHub/virtual/lib/python3.9/site-packages/pandas/core/indexes/datetimes.py in date_range(start, end, periods, freq, tz, normalize, name, closed, **kwargs)
1095 freq = "D"
1096
-> 1097 dtarr = DatetimeArray._generate_range(
1098 start=start,
1099 end=end,
~/Documents/GitHub/virtual/lib/python3.9/site-packages/pandas/core/arrays/datetimes.py in _generate_range(cls, start, end, periods, freq, tz, normalize, ambiguous, nonexistent, closed)
450
451 if tz is not None and index.tz is None:
--> 452 arr = tzconversion.tz_localize_to_utc(
453 index.asi8, tz, ambiguous=ambiguous, nonexistent=nonexistent
454 )
~/Documents/GitHub/virtual/lib/python3.9/site-packages/pandas/_libs/tslibs/tzconversion.pyx in pandas._libs.tslibs.tzconversion.tz_localize_to_utc()
~/Documents/GitHub/virtual/lib/python3.9/site-packages/pandas/_libs/tslibs/timezones.pyx in pandas._libs.tslibs.timezones.get_dst_info()
AttributeError: 'NoneType' object has no attribute 'total_seconds'
Testing the parameters:
start_date==start_date1
and
end_date==end_date1
Both tests result in True.
|
[
"if understanding correctly you want to create a date range (1D freq) using ZoneInfo…if correct I see a few things going on with your code.\n#1 When dealing with datetimes be sure the object is in the correct dtype. I believe datetime64 format will work better.\n#2 From the provide code I don’t think ‘strptime’ or ‘replace’ are needed. To access \"America/Vancouver\" within ZoneInfo you can make it work if you parse start_date1 into years, months, days, hours and minutes.\n#3 When start_date1 is parsed, you can add 3 to years (or another number) to create the end date.\nThe above will create a DatetimeIndex over the specified range.\nDatetimes are always tricky. As always you can get to the same destination using different paths…this is just one of them.\nstart_date_str = '2021-01-01 00:00:00'\nstart_date_datetime64 = pd.to_datetime(start_date_str) # change dtype to datetime64\n\nyear = start_date_datetime64.year\nmonth = start_date_datetime64.month\nday = start_date_datetime64.day\nhour = start_date_datetime64.hour\nminute = start_date_datetime64.minute\n\nstart_date_formatted = dt.datetime(year, month, day, hour, minute, tzinfo=ZoneInfo(\"America/Vancouver\"))\nend_date_formatted = dt.datetime(year + 3, month, day, hour, minute, tzinfo=ZoneInfo(\"America/Vancouver\"))\n\nresult = pd.date_range(start_date_formatted, end_date_formatted-pd.Timedelta(days=1), freq='d')\n\nOUTPUT- DatetimeIndex, dtype='datetime64[ns, America/Vancouver]', length=1095, freq='D')\n"
] |
[
0
] |
[] |
[] |
[
"python_3.x",
"zoneinfo"
] |
stackoverflow_0074661154_python_3.x_zoneinfo.txt
|
Q:
Downloading Mapbox style packs returns a cancellation error
I am trying to download multiple style packs for offline use by following the example in the Maps SDK for iOS documentation.
In the documentation example, the completion handler handles cancellation errors differently than all other types of errors, like so:
if case StylePackError.canceled = error {
handleCancelation()
} else {
handleFailure()
}
When I call loadStylePack() multiple times simultaneously with the same styleURI, the completion handler is called with a canceled StylePackError.
Is it safe to assume that a cancellation error will occur when attempting to download a styleURI that is already being downloaded? I was unable to find documentation indicating under what conditions a cancellation error can occur.
In other words, should I call loadStylePack() again if it's error type is canceled, or can I assume the data is already loaded?
My question applies to both the iOS and Android SDKs.
A:
It is not safe to assume that a cancellation error will occur when attempting to download a styleURI that is already being downloaded. The conditions under which a cancellation error can occur are not specified in the documentation, so it is difficult to say for certain when a cancellation error will occur.
If a StylePackError.canceled error is returned, it is best to handle it in the same way that you would handle any other error, rather than assuming that the data is already loaded. You may want to consider implementing a mechanism to check whether a given styleURI is already being downloaded before calling loadStylePack() to avoid attempting to download the same styleURI multiple times.
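A minimal Swift sketch of such a mechanism — note that the offlineManager property, the stylePackOptions value, and the exact loadStylePack signature are assumptions based on the SDK example in the question, not verified API:
var inFlightStyleURIs = Set<StyleURI>()

func downloadStylePackOnce(_ styleURI: StyleURI) {
    // insert(_:).inserted is false if the URI was already in the set
    guard inFlightStyleURIs.insert(styleURI).inserted else { return }
    offlineManager.loadStylePack(for: styleURI, loadOptions: stylePackOptions) { result in
        inFlightStyleURIs.remove(styleURI)
        // handle success / StylePackError.canceled / other failures here
    }
}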
|
Downloading Mapbox style packs returns a cancellation error
|
I am trying to download multiple style packs for offline use by following the example in the Maps SDK for iOS documentation.
In the documentation example, the completion handler handles cancellation errors differently than all other types of errors, like so:
if case StylePackError.canceled = error {
handleCancelation()
} else {
handleFailure()
}
When I call loadStylePack() multiple times simultaneously with the same styleURI, the completion handler is called with a canceled StylePackError.
Is it safe to assume that a cancellation error will occur when attempting to download a styleURI that is already being downloaded? I was unable to find documentation indicating under what conditions a cancellation error can occur.
In other words, should I call loadStylePack() again if it's error type is canceled, or can I assume the data is already loaded?
My question applies to both the iOS and Android SDKs.
|
[
"It is not safe to assume that a cancellation error will occur when attempting to download a styleURI that is already being downloaded. The conditions under which a cancellation error can occur are not specified in the documentation, so it is difficult to say for certain when a cancellation error will occur.\nIf a StylePackError.canceled error is returned, it is best to handle it in the same way that you would handle any other error, rather than assuming that the data is already loaded. You may want to consider implementing a mechanism to check whether a given styleURI is already being downloaded before calling loadStylePack() to avoid attempting to download the same styleURI multiple times.\n"
] |
[
0
] |
[] |
[] |
[
"mapbox",
"mapbox_android",
"mapbox_ios"
] |
stackoverflow_0074661953_mapbox_mapbox_android_mapbox_ios.txt
|
Q:
How to run "sysprep /generalize” in Azure Windows Virtual Machine (VM) from my local machine using Powershell?
I have a Windows Azure VM and need to execute “%windir%\system32\sysprep” and then “sysprep /generalize”, both in admin mode, from my local machine through PowerShell. How can I do that?
A:
For your requirements, as far as I know you can use a PowerShell script to achieve it. First, take a look at Sysprep; it can be run as a PowerShell command: C:\WINDOWS\system32\sysprep\sysprep.exe /generalize /shutdown /oobe. Put this command inside a script, then you can use two ways to run this script in the VM from your local machine. One is to use the Invoke command.
In Azure CLI:
az vm run-command invoke --command-id RunPowerShellScript -g group_name -n vm_name --scripts @script.ps1
In PowerShell:
Invoke-AzVMRunCommand -ResourceGroupName 'rgname' -VMName 'vmname' -CommandId 'RunPowerShellScript' -ScriptPath 'sample.ps1'
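For reference, the sample.ps1 passed in can be as small as this — a sketch; adjust the sysprep switches to your needs:
# sample.ps1 - runs inside the VM via RunPowerShellScript
& "$env:windir\System32\Sysprep\sysprep.exe" /generalize /oobe /shutdown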
Another is to use the VM extension. It's a little more complex. You can take a look at the Azure PowerShell command Set-AzVMCustomScriptExtension.
Output after running:-
Value[0] :
Code : ComponentStatus/StdOut/succeeded
Level : Info
DisplayStatus : Provisioning succeeded
Message :
Value[1] :
Code : ComponentStatus/StdErr/succeeded
Level : Info
DisplayStatus : Provisioning succeeded
Message :
Status : Succeeded
Capacity : 0
Count : 0
A:
I couldn't make sysprep work with Invoke-AzVMRunCommand. It ran with a succeeded status, but the VM was not shut down.
Finally found https://developercommunity.visualstudio.com/t/devops-sysprep-public-agents/1375989 and it makes sense.
So just using Invoke-AzVMRunCommand to run sysprep won't work; resetting a local admin user password and running the process as that local admin might be a workaround.
|
How to run "sysprep /generalize” in Azure Windows Virtual Machine (VM) from my local machine using Powershell?
|
I have a Windows Azure VM and need to execute “%windir%\system32\sysprep” and then “sysprep /generalize”, both in admin mode, from my local machine through PowerShell. How can I do that?
|
[
"For your requirements, as I know you can use a PowerShell script to achieve it. First, you can take a look at the Sysprep, it can be run in a PowerShell command C:\\WINDOWS\\system32\\sysprep\\sysprep.exe /generalize /shutdown /oobe. Put this command inside a script, then you can use two ways to run this script in the VM from your local machine. One is that use the Invoke command.\nIn Azure CLI:\naz vm run-command invoke --command-id RunPowerShellScript -g group_name -n vm_name --scripts @script.ps1\n\nIn PowerShell:\nInvoke-AzVMRunCommand -ResourceGroupName 'rgname' -VMName 'vmname' -CommandId 'RunPowerShellScript' -ScriptPath 'sample.ps1'\n\nAnother is that use the VM extension. It's a little complex. You can take a look at the Azure PowerShell command Set-AzVMCustomScriptExtension. \nOutput after running:-\nValue[0] : \n Code : ComponentStatus/StdOut/succeeded\n Level : Info\n DisplayStatus : Provisioning succeeded\n Message : \nValue[1] : \n Code : ComponentStatus/StdErr/succeeded\n Level : Info\n DisplayStatus : Provisioning succeeded\n Message : \nStatus : Succeeded\nCapacity : 0\nCount : 0\n\n",
"I could't make sysprep work with Invoke-AzVMRunCommand, It run with succeeded status, but the VM was not shutdown.\nFinally found https://developercommunity.visualstudio.com/t/devops-sysprep-public-agents/1375989 and it make sense.\nSo just use Invoke-AzVMRunCommand to run sysprep won't work, I am thinking to reset a local admin user password and run the process as local admin might be a workaround.\n"
] |
[
0,
0
] |
[] |
[] |
[
"azure",
"azure_powershell",
"azure_vm",
"powershell",
"powershell_remoting"
] |
stackoverflow_0061954959_azure_azure_powershell_azure_vm_powershell_powershell_remoting.txt
|
Q:
Finding the root directory of a dependency in NPM
Let's say I have installed awesome-package inside my-app, and let's say the structure looks like:
my-app/
node_modules/
awesome-package/
node_modules/
another-package/
static/
index.js
dist/
index.js
dist/
index.js
Inside my-app/index.js I do require('awesome-package'). Now I want to get the root directory of awesome-package, so I can basically fs.readFileSync something from another-package
How can I get the root directory of a script?
A:
I think require.resolve can be used to achieve that. It will give you the full path to the module. Getting the root directory from that should be easy.
const path = require('path');
let loc = require.resolve('awesome-package');
// most likely something like the following depends on the package
// /path/to/my-app/node_modules/awesome-package/static/index.js
console.log(loc);
// something like the following will give you the root directory
// of the package
console.log(path.join(
loc.substring(0, loc.lastIndexOf('node_modules')),
'node_modules',
'awesome-package'
));
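From there, reading a file out of the nested dependency could look like the following sketch, using the directory layout from the question:
const fs = require('fs');

// root directory of awesome-package, computed as above
const root = path.join(
  loc.substring(0, loc.lastIndexOf('node_modules')),
  'node_modules',
  'awesome-package'
);

// read a file from the nested another-package
const data = fs.readFileSync(
  path.join(root, 'node_modules', 'another-package', 'static', 'index.js'),
  'utf8'
);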
|
Finding the root directory of a dependency in NPM
|
Let's say I have installed awesome-package inside my-app, and let's say the structure looks like:
my-app/
node_modules/
awesome-package/
node_modules/
another-package/
static/
index.js
dist/
index.js
dist/
index.js
Inside my-app/index.js I do require('awesome-package'). Now I want to get the root directory of awesome-package, so I can basically fs.readFileSync something from another-package
How can I get the root directory of a script?
|
[
"I think require.resolve can be used to achieve that. It will give you the full path to the module. Getting the root directory from that should be easy.\nconst path = require('path');\nlet loc = require.resolve('awesome-package');\n\n// most likely something like the following depends on the package\n// /path/to/my-app/node_modules/awesome-package/static/index.js\nconsole.log(loc);\n\n// something like the following will give you the root directory\n// of the package\nconsole.log(path.join(\n loc.substring(0,a.lastIndexOf('node_modules')),\n 'node_modules',\n 'awesome-package'\n));\n\n"
] |
[
1
] |
[
"You can use the __dirname global variable to get the directory path of the script that is currently being executed. This variable is provided by Node.js and will contain the absolute path of the script's directory.\nFor example, if you are running your code inside my-app/index.js and you want to get the root directory of awesome-package, you could use the following code:\nconst path = require('path');\n\n// Get the absolute path of the script that is currently being executed\nconst currentScriptPath = __dirname;\n\n// Get the parent directory of the script that is currently being executed\nconst parentDirectory = path.dirname(currentScriptPath);\n\n// Join the parent directory with the name of the package to get the root directory of the package\nconst awesomePackageRoot = path.join(parentDirectory, 'awesome-package');\n\nThis will give you the absolute path to the root directory of awesome-package. You can then use this path to access files within the package, such as another-package, using the fs module. For example:\nconst fs = require('fs');\n\n// Read the index.js file from the \"another-package\" directory\nconst anotherPackageIndex = fs.readFileSync(path.join(awesomePackageRoot, 'node_modules', 'another-package', 'static', 'index.js'));\n\n"
] |
[
-1
] |
[
"node.js"
] |
stackoverflow_0058442451_node.js.txt
|
Q:
Bulk update all taxonomy terms in wordpress
My custom post type "references" has a custom field called "references_count". It has a numeric value.
I have a custom taxonomy called "country" with a custom field called "country_count" for the terms.
Background:
The custom post type "references" saves cities with a number of clients in this city. This value is saved in the field "references_count". In the custom taxonomy there are countries. For each country, there is a total number of references.
Example:
In the city of "Berlin" there are 3 clients. In the city of "Munich" there are 2 clients. The taxonomy term "Germany" includes the sum of all cities in this country. So the value of "country_count" in this example for the taxonomy term "Germany" is 5, being the sum of the references of each city.
I wrote this code which is working, if I'm saving each individual taxonomy term.
add_action( 'edited_country', 'update_counter_for_countries', 10, 2 );
function update_counter_for_countries( $term_id ) {
// Get posts with term
$args = array(
'post_type' => 'reference',
'posts_per_page' => -1,
'tax_query' => array(
array(
'taxonomy' => 'country',
'field' => 'term_id',
'terms' => $term_id
)
)
);
$the_query = new WP_Query( $args );
// sum values in posts
$sumTerm = 0;
if ( $the_query->have_posts() ) {
while ( $the_query->have_posts() ) {
$the_query->the_post();
$number = get_field( 'references_count', get_the_ID() );
$sumTerm = $sumTerm + $number;
}
}
wp_reset_postdata();
// update field in term
update_field( 'country_count', $sumTerm, 'country'.'_'.$term_id );
}
Problem:
I have more than 100 countries (taxonomy terms), so I have to save each term individually to get things going.
What I am looking for: Is there a way to update / save all custom taxonomy terms at once, so I don't have to update each term separately? I checked out a lot of plugins, but couldn't find any plugin which gives the possibility of "bulk edit" or "bulk save" for taxonomy terms. I would prefer a solution without a plugin if possible. I am very grateful for any hint, thank you very much.
A:
You can use this code to update all terms in one go.
Just make sure to back up your database first, in case it's needed.
This code will loop through all the terms and will only run once; after that you can remove it.
Just to make this code run only on your IP, change 111.111.111.111 to your IP ADDRESS.
if($_SERVER["REMOTE_ADDR"]=='111.111.111.111'){
//run only my ip
add_action("init","update_all_terms_in_one_go");
}
function update_all_terms_in_one_go(){
session_start();
if(isset($_SESSION['all_terms_updated']) && $_SESSION['all_terms_updated'] == "done"){
return;
}
$taxonomy = "country";
$terms = get_terms([
'taxonomy' => $taxonomy,
'hide_empty' => false,
]);
foreach ($terms as $term) {
update_counter_for_countries( $term->term_id );
}
$_SESSION['all_terms_updated'] = "done";
echo "ALL TAXONOMY TERMS UPDATED";
die();
}
function update_counter_for_countries( $term_id ) {
// Get posts with term
$args = array(
'post_type' => 'reference',
'posts_per_page' => -1,
'tax_query' => array(
array(
'taxonomy' => 'country',
'field' => 'term_id',
'terms' => $term_id
)
)
);
$the_query = new WP_Query( $args );
// sum values in posts
$sumTerm = 0;
if ( $the_query->have_posts() ) {
while ( $the_query->have_posts() ) {
$the_query->the_post();
$number = get_field( 'references_count', get_the_ID() );
$sumTerm = $sumTerm + $number;
}
}
wp_reset_postdata();
// update field in term
update_field( 'country_count', $sumTerm, 'country'.'_'.$term_id );
}
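If you have shell access, an alternative to the IP-gated init hook is to trigger the same loop once from WP-CLI — a sketch assuming WP-CLI is installed and update_counter_for_countries() is loaded (e.g. from your theme or a plugin):
wp eval 'foreach ( get_terms( [ "taxonomy" => "country", "hide_empty" => false ] ) as $t ) { update_counter_for_countries( $t->term_id ); }'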
A:
I wanted to have a neat admin page with a button to bulk update all my taxonomy terms. The process should work using AJAX so it can take a while without confusing the user. There should be status messages.
So in the first step I added a new admin page to the wordpress backend.
add_action( 'admin_menu', 'my_admin_page' ); // plain function callback; array( $this, ... ) only applies inside a class
function my_admin_page() {
add_menu_page(
__('Bulk Terms'), // page title
__('Bulk Terms'), // menu title
'manage_options', // user capabilities
'options-page', // menu slug
'my_output_function', // output function
'dashicons-admin-generic', // menu icon
77 // menu position
);
}
In the output function I put a form with a button to bulk update terms. And some status messages.
function my_output_function() {
echo '<div class="wrap">';
echo '<form action="admin.php?page=options-page" method="post">';
wp_nonce_field( 'ajax_validation', 'nonce' ); // security feature
submit_button('Update Terms', 'primary', 'submitOptions', false);
echo '</form>';
echo '<div id="processing" style="display:none;">Please wait</div>';
echo '<div id="error" style="display:none;">Something went wrong</div>';
echo '<div id="success" style="display:none;">Done!</div>';
echo '</div>';
}
In the second step I had to enqueue script file for ajax calls:
add_action( 'admin_enqueue_scripts', 'my_ajax_scripts' );
function my_ajax_scripts() {
// Check if on specific admin page
global $pagenow;
if (( $pagenow == 'admin.php' ) && ($_GET['page'] == 'options-page')):
wp_enqueue_script( 'ajaxcalls', plugin_dir_url( __FILE__ ).'/js/ajax-calls.js', array('jquery'), '1.0.0', true );
wp_localize_script( 'ajaxcalls', 'ajax_object', array(
'ajaxurl' => admin_url( 'admin-ajax.php' ),
'ajaxnonce' => wp_create_nonce( 'ajax_validation' )
) );
endif;
}
And create the function to bulk update all my taxnomy terms:
// Hook the handler so admin-ajax.php can route the 'options_page_action' request
add_action( 'wp_ajax_options_page_action', 'options_page_action' );

function options_page_action() {
    check_ajax_referer( 'ajax_validation', 'nonce' ); // verify the nonce sent from the JS
$taxonomy = "country";
$terms = get_terms([
'taxonomy' => $taxonomy,
'hide_empty' => false,
]);
foreach ($terms as $term) {
$term_id = $term->term_id;
// Get posts with term
$args = array(
'post_type' => 'reference',
'posts_per_page' => -1,
'tax_query' => array(
array(
'taxonomy' => 'country',
'field' => 'term_id',
'terms' => $term_id
)
)
);
$the_query = new WP_Query( $args );
// sum values in posts
$sumTerm = 0;
if ( $the_query->have_posts() ) {
while ( $the_query->have_posts() ) {
$the_query->the_post();
$number = get_field( 'references_count', get_the_ID() );
$sumTerm = $sumTerm + $number;
}
}
wp_reset_postdata();
// update field in term
update_field( 'country_count', $sumTerm, 'country'.'_'.$term_id );
}
$result = array( 'status' => 'success' ); // create response
wp_send_json_success( $result ); // send response
wp_die(); // close ajax request
}
As the third step, in my ajax_calls.js file I take the click event to call the function for updating the taxonomy terms using ajax.
jQuery(function($) { // WP admin loads jQuery in noConflict mode, so bind $ locally
$( '#submitOptions' ).click( function(event) {
event.preventDefault();
$('#submitOptions').css('cssText','display:none;');
$('#processing').css('cssText','display: block;');
$.ajax({
type: 'POST',
url: ajax_object.ajaxurl,
data: {
action: 'options_page_action',
nonce: ajax_object.ajaxnonce
},
success: function( response ) {
if( response['data']['status'] == 'success' ) {
$('#processing').css('cssText','display:none;');
$('#success').css('cssText','display:block;');
}
},
error: function() {
$('#processing').css('cssText','display:none;');
$('#error').css('cssText','display:block;');
}
});
});
}); // end jQuery wrapper
There are messages indicating that the function is running and it show messages when it's done or has an error. This way the user will always know what's going on when bulk updating terms.
|
Bulk update all taxonomy terms in wordpress
|
My custom post type "references" has a custom field called "references_count". It has a numeric value.
I have a custom taxonomy called "country" with a custom field called "country_count" for the terms.
Background:
The custom post type "references" saves cities with a number of clients in this city. This value is saved in the field "references_count". In the custom taxonomy there are countries. For each country, there is a total number of references.
Example:
In the city of "Berlin" there are 3 clients. In the city of "Munich" there are 2 clients. The taxonomy term "Germany" includes the sum of all cities in this country. So the value of "country_count" in this example for the taxonomy term "Germany" is 5, being the sum of the references of each city.
I wrote this code which is working, if I'm saving each individual taxonomy term.
add_action( 'edited_country', 'update_counter_for_countries', 10, 2 );
function update_counter_for_countries( $term_id ) {
// Get posts with term
$args = array(
'post_type' => 'reference',
'posts_per_page' => -1,
'tax_query' => array(
array(
'taxonomy' => 'country',
'field' => 'term_id',
'terms' => $term_id
)
)
);
$the_query = new WP_Query( $args );
// sum values in posts
$sumTerm = 0;
if ( $the_query->have_posts() ) {
while ( $the_query->have_posts() ) {
$the_query->the_post();
$number = get_field( 'references_count', get_the_ID() );
$sumTerm = $sumTerm + $number;
}
}
wp_reset_postdata();
// update field in term
update_field( 'country_count', $sumTerm, 'country'.'_'.$term_id );
}
Problem:
I have more than 100 countries (taxonomy terms), so I have to save each term individually to get things going.
What I am looking for: Is there a way to update / save all custom taxonomy terms at once, so I don't have to update each term separately? I checked out a lot of plugins, but couldn't find any plugin which gives the possibility of "bulk edit" or "bulk save" for taxonomy terms. I would prefer a solution without a plugin if possible. I am very grateful for any hint, thank you very much.
|
[
"You can use this code to update all terms in one go.\nJust make sure to backup your database in case needed.\nThis code will loop through all the terms and will only run once. after that you can remove this code.\nJust to make this code run only on your IP, change 111.111.111.111 to your IP ADDRESS.\nif($_SERVER[\"REMOTE_ADDR\"]=='111.111.111.111'){\n//run only my ip\n add_action(\"init\",\"update_all_terms_in_one_go\");\n}\n\nfunction update_all_terms_in_one_go(){\n session_start();\n if(isset($_SESSION['all_terms_updated']) && $_SESSION['all_terms_updated'] == \"done\"){\n return;\n }\n $taxonomy = \"country\";\n $terms = get_terms([\n 'taxonomy' => $taxonomy,\n 'hide_empty' => false,\n ]);\n foreach ($terms as $term) {\n update_counter_for_countries( $term->term_id );\n }\n $_SESSION['all_terms_updated'] = \"done\";\n echo \"ALL TAXONOMY TERMS UPDATED\";\n die();\n\n}\n\nfunction update_counter_for_countries( $term_id ) {\n// Get posts with term\n $args = array(\n 'post_type' => 'reference',\n 'posts_per_page' => -1,\n 'tax_query' => array(\n array(\n 'taxonomy' => 'country',\n 'field' => 'term_id',\n 'terms' => $term_id\n )\n )\n );\n $the_query = new WP_Query( $args );\n\n// sum values in posts\n $sumTerm = 0;\n if ( $the_query->have_posts() ) {\n while ( $the_query->have_posts() ) {\n $the_query->the_post();\n $number = get_field( 'references_count', get_the_ID() );\n $sumTerm = $sumTerm + $number;\n }\n }\n wp_reset_postdata();\n// update field in term\n update_field( 'country_count', $sumTerm, 'country'.'_'.$term_id );\n}\n\n",
"I wanted to have a neat admin page with a button to bulk update all my taxonomy terms. The process should work using AJAX so it can take a while without confusing the user. There should be status messages.\nSo in the first step I added a new admin page to the wordpress backend.\nadd_action( 'admin_menu', array( $this, 'my_admin_page' ) );\n\nfunction my_admin_page() {\n add_menu_page(\n __('Bulk Terms'), // page title\n __('Bulk Terms'), // menu title\n 'manage_options', // user capabilities\n 'options-page', // menu slug\n 'my_output_function', // output function\n 'dashicons-admin-generic', // menu icon\n 77 // menu position\n );\n}\n\nIn the output function I put a form with a button to bulk update terms. And some status messages.\nfunction my_output_function() {\n echo '<div class=\"wrap\">';\n echo '<form action=\"admin.php?page=options-page\" method=\"post\">';\n wp_nonce_field( 'ajax_validation', 'nonce' ); // security feature\n submit_button('Update Terms', 'primary', 'submitOptions', false);\n echo '</form>';\n echo '<div id=\"processing\" style=\"display:none;\">Please wait</div>';\n echo '<div id=\"error\" style=\"display:none;\">Something went wrong</div>';\n echo '<div id=\"success\" style=\"display:none;\">Done!</div>';\n echo '</div>';\n}\n\nIn the second step I had to enqueue script file for ajax calls:\nadd_action( 'admin_enqueue_scripts', 'my_ajax_scripts' );\n\nfunction my_ajax_scripts() {\n // Check if on specific admin page\n global $pagenow;\n if (( $pagenow == 'admin.php' ) && ($_GET['page'] == 'options-page')):\n wp_enqueue_script( 'ajaxcalls', plugin_dir_url( __FILE__ ).'/js/ajax-calls.js', array('jquery'), '1.0.0', true );\n\n wp_localize_script( 'ajaxcalls', 'ajax_object', array(\n 'ajaxurl' => admin_url( 'admin-ajax.php' ),\n 'ajaxnonce' => wp_create_nonce( 'ajax_validation' )\n ) );\n endif;\n}\n\nAnd create the function to bulk update all my taxnomy terms:\nfunction options_page_action() {\n $taxonomy = \"country\";\n $terms = get_terms([\n 'taxonomy' => $taxonomy,\n 'hide_empty' => false,\n ]);\n foreach ($terms as $term) {\n $term_id = $term->term_id;\n // Get posts with term\n $args = array(\n 'post_type' => 'reference',\n 'posts_per_page' => -1,\n 'tax_query' => array(\n array(\n 'taxonomy' => 'country',\n 'field' => 'term_id',\n 'terms' => $term_id\n )\n )\n );\n $the_query = new WP_Query( $args );\n\n // sum values in posts\n $sumTerm = 0;\n if ( $the_query->have_posts() ) {\n while ( $the_query->have_posts() ) {\n $the_query->the_post();\n $number = get_field( 'references_count', get_the_ID() );\n $sumTerm = $sumTerm + $number;\n }\n }\n wp_reset_postdata();\n // update field in term\n update_field( 'country_count', $sumTerm, 'country'.'_'.$term_id );\n }\n $result = array( 'status' => 'success' ); // create response\n wp_send_json_success( $result ); // send response\n wp_die(); // close ajax request\n}\n\nAs the third step, in my ajax_calls.js file I take the click event to call the function for updating the taxonomy terms using ajax.\n$( '#submitOptions' ).click( function(event) {\n event.preventDefault();\n $('#submitOptions').css('cssText','display:none;');\n $('#processing').css('cssText','display: block;');\n $.ajax({\n type: 'POST',\n url: ajax_object.ajaxurl,\n data: {\n action: 'options_page_action',\n nonce: ajax_object.ajaxnonce\n },\n success: function( response ) {\n if( response['data']['status'] == 'success' ) {\n $('#processing').css('cssText','display:none;');\n $('#success').css('cssText','display:block;');\n }\n },\n error: 
function() {\n $('#processing').css('cssText','display:none;');\n $('#error').css('cssText','display:block;');\n }\n });\n});\n\nThere are messages indicating that the function is running and it show messages when it's done or has an error. This way the user will always know what's going on when bulk updating terms.\n"
] |
[
1,
0
] |
[] |
[] |
[
"bulk",
"custom_post_type",
"custom_taxonomy",
"taxonomy_terms",
"wordpress"
] |
stackoverflow_0074535852_bulk_custom_post_type_custom_taxonomy_taxonomy_terms_wordpress.txt
|
Q:
Firebase Functions with TypeScript - Expected at least 1 arguments, but got 0 or more
I was using Firebase Functions with JavaScript before and everything was working fine. Now I translated my code to TypeScript, and when I try to update my functions, one of them complains about the following error:
Expected at least 1 arguments, but got 0 or more.
The block of code which causes the problem is this one:
size = array.size;
if (size === 0) {
return;
} else {
array.forEach((doc : any) => {
docRefCarsDetails.push(db.collection('cars').doc(doc.get('licensePlate')));
})
return Promise.resolve(db.runTransaction(transaction => {
return Promise.resolve(transaction.getAll(...docRefCarsDetails)); // <-- this is the problem
}))
}
And as you can see, I even tried to check the size to make sure that this would not happen.
Thanks for helping!
A:
Change
return;
To
return null;
UPDATE
Or try this
db.runTransaction(transaction => {
return transaction.getAll(...docRefCarsDetails);
})
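For background: the compile error comes from spreading a possibly-empty array into getAll(), whose TypeScript signature requires at least one argument. A hedged sketch of a typing-level workaround, reusing the names from the question:
const [first, ...rest] = docRefCarsDetails;
if (first === undefined) {
    return null; // nothing to fetch, so skip the transaction
}
return db.runTransaction(transaction => transaction.getAll(first, ...rest));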
A:
I found a comment that helped me; it is working well.
import cors from "cors";
const corsHandler = cors({ origin: true }); // this is the solution
// allow cors in http function
export const myFunction = functions.https.onRequest((req, res) => {
  corsHandler(req, res, async () => {
    // your method body
  });
});
|
Firebase Functions with TypeScript - Expected at least 1 arguments, but got 0 or more
|
I was using Firebase Functions with JavaScript before and everything was working fine. Now I translated my code to TypeScript, and when I try to update my functions, one of them complains about the following error:
Expected at least 1 arguments, but got 0 or more.
The block of code which causes the problem is this one:
size = array.size;
if (size === 0) {
return;
} else {
array.forEach((doc : any) => {
docRefCarsDetails.push(db.collection('cars').doc(doc.get('licensePlate')));
})
return Promise.resolve(db.runTransaction(transaction => {
return Promise.resolve(transaction.getAll(...docRefCarsDetails)); // <-- this is the problem
}))
}
And as you can see, I even tried to check the size to make sure that this would not happen.
Thanks for helping!
|
[
"Change\nreturn;\n\nTo\nreturn null;\n\nUPDATE\nOr try this\ndb.runTransaction(transaction => {\n return transaction.getAll(...docRefCarsDetails); \n})\n\n",
"I found a comment that Help me. it is working well.\nconst corsHandler = cors({ origin: true }); // this is the solution\n\nimport cors from \"cors\";\nconst corsHandler = cors({ origin: true }); // this is the solution\n\n// allow cors in http function\nexport const myFunction = functions.https.onRequest((req, res) => {\ncorsHandler(req, res, async () => {\n\n// your method body\n\n });\n});\n\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"firebase",
"google_cloud_functions",
"typescript"
] |
stackoverflow_0054344746_firebase_google_cloud_functions_typescript.txt
|
Q:
PermissionError: [Errno 13] Permission denied when trying to play mp3 with python
I'm trying to play an mp3 with pydub, and I keep getting the error
File "c:\Users\ryanc\Desktop\codefiles\python\audio player.py", line 5, in <module>
play(song)
File "C:\Users\ryanc\AppData\Local\Programs\Python\Python39\lib\site-packages\pydub\playback.py", line 71, in play
_play_with_ffplay(audio_segment)
File "C:\Users\ryanc\AppData\Local\Programs\Python\Python39\lib\site-packages\pydub\playback.py", line 15, in _play_with_ffplay
seg.export(f.name, "wav")
File "C:\Users\ryanc\AppData\Local\Programs\Python\Python39\lib\site-packages\pydub\audio_segment.py", line 867, in export
out_f, _ = _fd_or_path_or_tempfile(out_f, 'wb+')
File "C:\Users\ryanc\AppData\Local\Programs\Python\Python39\lib\site-packages\pydub\utils.py", line 60, in _fd_or_path_or_tempfile
fd = open(fd, mode=mode)
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\ryanc\\AppData\\Local\\Temp\\tmpkdgigv5o.wav'
My code is just
from pydub import AudioSegment
from pydub.playback import play
song = AudioSegment.from_file("C:\\Users\\ryanc\\Music\\rr.mp3")
play(song)
I tried running VS Code as admin, but that didn't work either.
A:
So it seems that the pydub library by default is not able to play .mp3 songs. You will need to convert it into .wav format and then execute the command again.
So here is your code with some minor modifications:
from pydub import AudioSegment
from pydub.playback import play
song = AudioSegment.from_mp3("C:\\Users\\ryanc\\Music\\rr.mp3")
play(song)
Now, in order for this to work, you need to have ffmpeg installed; if not, it will again throw an error. Download ffmpeg and place it in your script directory.
Here is the link to make you better understand the process.
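If ffmpeg is not on your PATH, pydub also lets you point at the executable explicitly. A minimal sketch (the install path is an assumption; adjust it to wherever you unpacked ffmpeg):
from pydub import AudioSegment

# tell pydub where the ffmpeg binary lives (hypothetical path)
AudioSegment.converter = "C:\\ffmpeg\\bin\\ffmpeg.exe"

song = AudioSegment.from_mp3("C:\\Users\\ryanc\\Music\\rr.mp3")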
A:
If saving the mp3 to a file isn't a pain for you, then using the audio playback extension is the easiest option.
https://marketplace.visualstudio.com/items?itemName=sukumo28.wav-preview
A:
Try this link:
https://github.com/jiaaro/pydub/issues/209
Adding an f.close() line in playback.py to close the stream works magic.
def _play_with_ffplay(seg):
    with NamedTemporaryFile("w+b", suffix=".wav") as f:
        f.close()  # close the file stream so ffplay can open the exported file
        seg.export(f.name, "wav")
        subprocess.call([PLAYER, "-nodisp", "-autoexit", "-hide_banner", f.name])
A:
The solution is to install PyAudio from the Python Package Index.
|
PermissionError: [Errno 13] Permission denied when trying to play mp3 with python
|
I'm trying to play an mp3 with pydub, and I keep getting the error
File "c:\Users\ryanc\Desktop\codefiles\python\audio player.py", line 5, in <module>
play(song)
File "C:\Users\ryanc\AppData\Local\Programs\Python\Python39\lib\site-packages\pydub\playback.py", line 71, in play
_play_with_ffplay(audio_segment)
File "C:\Users\ryanc\AppData\Local\Programs\Python\Python39\lib\site-packages\pydub\playback.py", line 15, in _play_with_ffplay
seg.export(f.name, "wav")
File "C:\Users\ryanc\AppData\Local\Programs\Python\Python39\lib\site-packages\pydub\audio_segment.py", line 867, in export
out_f, _ = _fd_or_path_or_tempfile(out_f, 'wb+')
File "C:\Users\ryanc\AppData\Local\Programs\Python\Python39\lib\site-packages\pydub\utils.py", line 60, in _fd_or_path_or_tempfile
fd = open(fd, mode=mode)
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\ryanc\\AppData\\Local\\Temp\\tmpkdgigv5o.wav'
My code is just
from pydub import AudioSegment
from pydub.playback import play
song = AudioSegment.from_file("C:\\Users\\ryanc\\Music\\rr.mp3")
play(song)
I tried running VS Code as admin, but that didn't work either.
|
[
"So it seems that 'pydub' library by default is not able to play .mp3 songs. You will neeed to convert it into .wav format and then execute the command again.\nSo here is your code with some minor modifications:\nfrom pydub import AudioSegment\nfrom pydub.playback import play\n\nsong = AudioSegment.from_mp3(\"C:\\\\Users\\\\ryanc\\\\Music\\\\rr.mp3\")\nplay(song)\n\nNow in order to work for this you need to have the ffmpeg installed. If not it will gain throw an error. Download ffmpeg and paste the code to your script directory.\nHere is the link to make you better understand the process.\n",
"If saving the mp3 to a file isn't a pain for you, then using the audio playback extension is the easiest option.\nhttps://marketplace.visualstudio.com/items?itemName=sukumo28.wav-preview\n",
"Try this link..\nhttps://github.com/jiaaro/pydub/issues/209\nAdding f.close() line in playback.py to close the stream works magic.\ndef _play_with_ffplay(seg):\nwith NamedTemporaryFile(\"w+b\", suffix=\".wav\") as f:\n f.close() # close the file stream\n seg.export(f.name, \"wav\")\n subprocess.call([PLAYER, \"-nodisp\", \"-autoexit\", \"-hide_banner\", f.name])\n\n",
"The solution is to install pyAudio from python package\n"
] |
[
1,
0,
0,
0
] |
[] |
[] |
[
"pydub",
"python"
] |
stackoverflow_0069323707_pydub_python.txt
|
Q:
Json to csv with cyrillic
This is my php code:
<?php
$jsonData = file_get_contents("1.json");
$jsonDecoded = json_decode($jsonData);
$csv = '1.csv';
$fileCsv = fopen($csv, 'w');
foreach($jsonDecoded as $i){
fputcsv($fileCsv, $i);
}
fclose($fileCsv);
?>
In 1.json I have data written in Cyrillic. When 1.csv is opened via Excel, there is a problem with decoding it: it shows me random non-Cyrillic symbols. Why is that, and how can I fix it?
I'm not sure where this problem comes from. Could it just be a problem with Excel? I'm using Excel 2016.
The desired Excel:
How it actually looks:
A:
A UTF-8 CSV file needs a Byte Order Mark as its first three octets for Excel to detect the encoding. These are the hex values 0xEF, 0xBB, 0xBF. So you can do:
$fileCsv = fopen($csv, 'w');
fprintf($fileCsv, chr(0xEF).chr(0xBB).chr(0xBF));
foreach($jsonDecoded as $i){
fputcsv($fileCsv, $i);
}
fclose($fileCsv);
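A small aside on the snippet above: fprintf treats its second argument as a format string. The BOM bytes contain no % characters, so it is harmless here, but fwrite is the more direct tool. An equivalent sketch (reusing $jsonDecoded from the question):
$fileCsv = fopen('1.csv', 'w');
fwrite($fileCsv, "\xEF\xBB\xBF"); // UTF-8 BOM so Excel detects the encoding
foreach ($jsonDecoded as $i) {
    fputcsv($fileCsv, $i);
}
fclose($fileCsv);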
|
Json to csv with cyrillic
|
This is my php code:
<?php
$jsonData = file_get_contents("1.json");
$jsonDecoded = json_decode($jsonData);
$csv = '1.csv';
$fileCsv = fopen($csv, 'w');
foreach($jsonDecoded as $i){
fputcsv($fileCsv, $i);
}
fclose($fileCsv);
?>
In 1.json I have data written in Cyrillic. When 1.csv is opened via Excel, there is a problem with decoding it: it shows me random non-Cyrillic symbols. Why is that, and how can I fix it?
I'm not sure where this problem comes from. Could it just be a problem with Excel? I'm using Excel 2016.
The desired Excel:
How it actually looks:
|
[
"A utf8 CSV file has a Byte Order Mark as its first three octets. These are the hex values 0xEF, 0xBB, 0xBF. So you can do:\n$fileCsv = fopen($csv, 'w');\nfprintf($fileCsv, chr(0xEF).chr(0xBB).chr(0xBF));\nforeach($jsonDecoded as $i){\n fputcsv($fileCsv, $i);\n}\n\nfclose($fileCsv);\n\n"
] |
[
0
] |
[] |
[] |
[
"csv",
"cyrillic",
"export_to_csv",
"json",
"php"
] |
stackoverflow_0074661965_csv_cyrillic_export_to_csv_json_php.txt
|
Q:
How to display a number of selected items from a tableView Cell on a ViewController navigation title?
I am trying to display the number of items selected via a UIButton in a UITableViewCell in the navigation title of a ViewController. It works; however, you'd have to exit the screen and return to see the number of selected items. How can I resolve this?
https://gifyu.com/image/Sh5gs
class DealerCardTableViewCell: UITableViewCell {
@IBOutlet weak var selectDealerButton: UIButton!
var dealerCount = 0{
didSet{
dealerListDelegate?.didUpdateDealerCount(dealerCount: dealerCount)
}
}
@objc func selectDealer() {
guard let account = account else { return }
selectDealerButton.isSelected = !selectDealerButton.isSelected
dealerCount+=selectDealerButton.isSelected ? 1 : -1
}
class DealerListViewController: DealerCardTableViewController {
override func viewWillAppear(_ animated: Bool) {
super.viewWillAppear(animated)
didUpdateDealerCount(dealerCount: 0)
}
func didUpdateDealerCount(dealerCount: Int){
navigationItem.title = "Selected Dealers - \(selectedAccounts.count + dealerCount)"
}
override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
guard let cell = super.tableView(tableView, cellForRowAt: indexPath) as? DealerCardTableViewCell, accounts.count > 0 else { return UITableViewCell() }
cell.delegate = self
}
A:
You can move the navigation-title update inside the selectDealer action, so the title updates every time you select or deselect a dealer:
class DealerCardTableViewCell: UITableViewCell {
    @IBOutlet weak var selectDealerButton: UIButton!

    var dealerCount = 0

    @objc func selectDealer() {
        guard let account = account else { return }

        selectDealerButton.isSelected = !selectDealerButton.isSelected
        dealerCount += selectDealerButton.isSelected ? 1 : -1
        // notify the controller through the existing delegate so the
        // title refreshes immediately
        dealerListDelegate?.didUpdateDealerCount(dealerCount: dealerCount)
    }
}

class DealerListViewController: DealerCardTableViewController {

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
    }

    func didUpdateDealerCount(dealerCount: Int) {
        navigationItem.title = "Selected Dealers - \(selectedAccounts.count + dealerCount)"
    }

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        guard let cell = super.tableView(tableView, cellForRowAt: indexPath) as? DealerCardTableViewCell, accounts.count > 0 else { return UITableViewCell() }

        cell.delegate = self
        return cell
    }
}
|
How to display a number of selected items from a tableView Cell on a ViewController navigation title?
|
I am trying to display the number of items selected via a UIButton in a UITableViewCell in the navigation title of a ViewController. It works; however, you'd have to exit the screen and return to see the number of selected items. How can I resolve this?
https://gifyu.com/image/Sh5gs
class DealerCardTableViewCell: UITableViewCell {
@IBOutlet weak var selectDealerButton: UIButton!
var dealerCount = 0{
didSet{
dealerListDelegate?.didUpdateDealerCount(dealerCount: dealerCount)
}
}
@objc func selectDealer() {
guard let account = account else { return }
selectDealerButton.isSelected = !selectDealerButton.isSelected
dealerCount+=selectDealerButton.isSelected ? 1 : -1
}
class DealerListViewController: DealerCardTableViewController {
override func viewWillAppear(_ animated: Bool) {
super.viewWillAppear(animated)
didUpdateDealerCount(dealerCount: 0)
}
func didUpdateDealerCount(dealerCount: Int){
navigationItem.title = "Selected Dealers - \(selectedAccounts.count + dealerCount)"
}
override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
guard let cell = super.tableView(tableView, cellForRowAt: indexPath) as? DealerCardTableViewCell, accounts.count > 0 else { return UITableViewCell() }
cell.delegate = self
}
|
[
"You can move the NavigationTitle update method inside the selectDealerButton action and it will update title every time you select or deselect a dealer.\nclass DealerCardTableViewCell: UITableViewCell {\n @IBOutlet weak var selectDealerButton: UIButton!\n\n var dealerCount = 0\n\n @objc func selectDealer() {\n guard let account = account else { return }\n \n selectDealerButton.isSelected = !selectDealerButton.isSelected\n dealerCount+=selectDealerButton.isSelected ? 1 : -1\n didUpdateDealerCount(dealerCount: dealerCount)\n }\n\n\nclass DealerListViewController: DealerCardTableViewController {\n\n override func viewWillAppear(_ animated: Bool) {\n super.viewWillAppear(animated)\n }\n\nfunc didUpdateDealerCount(dealerCount: Int){\n navigationItem.title = \"Selected Dealers - \\(selectedAccounts.count + dealerCount)\"\n}\n\n\n\noverride func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {\n guard let cell = super.tableView(tableView, cellForRowAt: indexPath) as? DealerCardTableViewCell, accounts.count > 0 else { return UITableViewCell() }\n\n cell.delegate = self\n\n}\n\n"
] |
[
1
] |
[] |
[] |
[
"ios",
"swift"
] |
stackoverflow_0074661599_ios_swift.txt
|
Q:
Less than in bitwise operation
I have a task to practice and I just can't get the solution. It should be noted that I am a total beginner. There is 1min for the exercise, so it should be child's play. But I am racking my brains over it.
I am supposed to write an expression for the following :
Let x and k be variables of type int. Formulate a C condition with bitwise operator that is true exactly when x is less than 2^(k-1).
I would have done this with the ?-operator. So something like
( ) ? 1 : 0
But which condition must be in the brackets? I can't think of a bitwise operator that can be used to compare 'less than'.
I really don't have any idea :/
A:
It seems you mean something like the following
if ( x < 1 << k - 1 ) { /*...*/ }
provided that the expression k - 1 is not negative and the expression 1 << k - 1 is representable as a positive value in an object of the type int.
A:
Let x and k be variables of type int.
int x;
int k;
Formulate a C condition with bitwise operator that is true exactly when x is less than 2^(k-1).
Start with directly translating it:
if ( x < pow(2,k-1) ) { ... }
Then remove the call to pow using bitwise operators
2-to-the-power-of k-1 is: 1<<(k-1)
Examples: to help clarify
2 ^ 3 == 1 << 3 == 8
2 ^ 5 == 1 << 5 == 32
So the final expression is:
( x < 1<<(k-1) ); // Result will either be 1 or 0
There is no need to use ternary ?: operator.
The boolean expression < will evaluate to 1 or 0 by itself!
A:
TL;DR
!(x >> k-1)
This is simple enough to be a viable answer that someone might come up with given one minute of time.
It only works within certain domains for x and k.
Coming up with a solution that covers the full integer domain, is a lot tougher.
The question lacks requirements, so I'll have to make a few assumptions.
For one, "a C condition with bitwise operator" could mean:
at least one bitwise operator
only bitwise operators (i.e. no other operators)
as many bitwise operators as possible (i.e. as few other operators as possible)
Most answers seem to go for #1, and use a comparison operator (<) to do the actual 'less than' test.
Assuming this is an assignment to teach you about the binary representation of integer numbers,
comparison operators should be out of scope.
Not in the least because comparison with a power of two is a simple bit mask test.
For example, this works for k-1 = 8:
(x & ~0xFF) == 0
As demonstrated by other posters,
a bit shift operator can be used to dynamically construct the bit mask from k,
but it's actually a lot smarter to bit-shift x itself in the opposite direction.
Basically, we divide x by 2^(k-1).
If the (truncated) result is zero, then obviously, x is less than 2^(k-1).
(x >> k-1) == 0
or:
!(x >> k-1)
or (avoiding the arithmetic operator -):
!(x << 1 >> k)
or (avoiding the logical operator ! as well):
(~(x<<1 | x | x>>1 | x>>2 | x>>3 | ... | x>>31) >> k) & 1
The latter demands knowledge of the width (in bits) of type int; assuming 32 bits here.
All of the above work only within certain domains for both x and k.
x must be non-negative; my expression will incorrectly return false for negative numbers.
k must be non-negative, and less than the width (in bits) of int (the type of x); my expression suffers from UB otherwise.
It is possible to make it work for wider domains.
I doubt it is on-topic for a one-minute assignment, but nevertheless it makes an interesting exercise.
To support negative x, just use INT_MIN as a bit mask to test the sign bit.
(x & INT_MIN) | ~((x<<1 | x | x>>1 | x>>2 | x>>3 | ... | x>>31) >> k) & 1
Or if you insist on always having 'true' represented by 1:
(x >> 31 | ~(x<<1 | x | x>>1 | x>>2 | x>>3 | ... | x>>31) >> k) & 1
(Again assuming 32 bits; if different, change the literal 31 accordingly.)
If k is negative, then every positive x should yield false.
Please note that this is identical to the behavior for zero k.
So just replace k with (k & ~k>>31) (the bitwise equivalent of (k < 0 ? 0 : k) in 32-bits architecture).
(x>>31 | ~(x<<1 | x | x>>1 | x>>2 | x>>3 | ... | x>>31) >> (k & ~k>>31)) & 1
For k greater than or equal to the size (in bits) of type int, every x should yield true:
(((k>>5 | k>>6 | k>>7 | ... | k>>30) & ~k>>31) | x>>31 | ~(x<<1 | x | x>>1 | x>>2 | x>>3 | ... | x>>31) >> (k & ~k>>31)) & 1
k>>5 as the first term may seem obscure. 5 is the base-2 logarithm of 32; in a 64-bit architecture we'd start with k>>6:
(((k>>6 | k>>7 | k>>8 | ... | k>>62) & ~k>>63) | x>>63 | ~(x<<1 | x | x>>1 | x>>2 | x>>3 | ... | x>>63) >> (k & ~k>>63)) & 1
If it would be acceptable to use not only bitwise, but also logical operators,
then the expression (here 32 bits) could be simplified to:
((k & ~31) && ~k>>31) || (x >> 31) || !(x << 1 >> (k & ~k>>31))
Here is a simple test (32 bits):
#include <stdio.h>
#include <limits.h>
static int purely_bitwise(int x, int k)
{
return (
((
k>>5 | k>>6 | k>>7 | k>>8 | k>>9 |
k>>10 | k>>11 | k>>12 | k>>13 | k>>14 | k>>15 | k>>16 | k>>17 | k>>18 | k>>19 |
k>>20 | k>>21 | k>>22 | k>>23 | k>>24 | k>>25 | k>>26 | k>>27 | k>>28 | k>>29 |
k>>30
) & ~k>>31) |
x>>31 |
~(
x<<1 | x | x>>1 | x>>2 | x>>3 | x>>4 | x>>5 | x>>6 | x>>7 | x>>8 | x>>9 |
x>>10 | x>>11 | x>>12 | x>>13 | x>>14 | x>>15 | x>>16 | x>>17 | x>>18 | x>>19 |
x>>20 | x>>21 | x>>22 | x>>23 | x>>24 | x>>25 | x>>26 | x>>27 | x>>28 | x>>29 |
x>>30 | x>>31
) >> (k & ~k>>31)
) & 1;
}
static int bitwise_and_logical(int x, int k)
{
return ((k & ~31) && ~k>>31) || (x >> 31) || !(x << 1 >> (k & ~k>>31));
}
static int comparison(int x, int k)
{
return k >= 32 || x < (k < 1 ? 1 : 1 << (k-1));
}
static int values[] = {
INT_MIN, INT_MIN+1, INT_MIN/2-1, INT_MIN/2, INT_MIN/2+1, -99, -9, -3, -2, -1,
0, 1, 2, 3, 9, 99, INT_MAX/2-1, INT_MAX/2, INT_MAX/2+1, INT_MAX-1, INT_MAX
};
#define LEN_VALUES (sizeof values / sizeof *values)
int main()
{
for (int ik = 0; ik < LEN_VALUES; ik++)
{
for (int ix = 0; ix < LEN_VALUES; ix++)
{
int x = values[ix];
int k = values[ik];
int actual1 = purely_bitwise(x, k);
int actual2 = bitwise_and_logical(x, k);
int expected = comparison(x, k);
if (actual1 != expected || actual2 != expected)
{
printf("x = %11d, k = %11d: expected = %d, actual = %d, %d\n",
x, k, expected, actual1, actual2);
}
}
}
return 0;
}
The program outputs nothing, which is good, as every line of output would be a failed test case.
|
Less than in bitwise operation
|
I have a task to practice and I just can't get the solution. It should be noted that I am a total beginner. There is 1min for the exercise, so it should be child's play. But I am racking my brains over it.
I am supposed to write an expression for the following :
Let x and k be variables of type int. Formulate a C condition with bitwise operator that is true exactly when x is less than 2^(k-1).
I would have done this with the ?-operator. So something like
( ) ? 1 : 0
But which condition must be in the brackets? I can't think of a bitwise operator that can be used to compare 'less than'.
I really don't have any idea :/
|
[
"It seems you mean something like the following\nif ( x < 1 << k - 1 ) { /*...*/ }\n\nprovided that the expression k - 1 is not negative and the expression 1 << k - 1 is representable as a positive value in an object of the type int.\n",
"\nLet x and k be variables of type int.\n\nint x;\nint k;\n\n\nFormulate a C condition with bitwise operator that is true exactly when x is less than 2^(k-1).\n\nStart with directly translating it:\nif ( x < pow(2,k-1) ) { ... }\n\nThen remove the call to pow using bitwise operators\n2-to-the-power-of k-1 is: 1<<(k-1)\nExamples: to help clarify\n2 ^ 3 == 1 << 3 == 8\n2 ^ 5 == 1 << 5 == 32\n\n\nSo the final expression is:\n( x < 1<<(k-1) ); // Result will either be 1 or 0\n\nThere is no need to use ternary ?: operator. \nThe boolean expression < will evaluate to 1 or 0 by itself!\n",
"TL;DR\n!(x >> k-1)\n\nThis is simple enough to be a viable answer that someone might come up with given one minute of time.\nIt only works within certain domains for x and k.\nComing up with a solution that covers the full integer domain, is a lot tougher.\n\nThe question lacks requirements, so I'll have to make a few assumptions.\nFor one, \"a C condition with bitwise operator\" could mean:\n\nat least one bitwise operator\nonly bitwise operators (i.e. no other operators)\nas many bitwise operators as possible (i.e. as few other operators as possible)\n\nMost answers seem to go for #1, and use a comparison operator (<) to do the actual 'less than' test.\nAssuming this is an assignment to teach you about the binary representation of integer numbers,\ncomparison operators should be out of scope.\nNot in the least because comparison with a power of two is a simple bit mask test.\nFor example, this works for k-1 = 8:\n(x & ~0xFF) == 0\n\nAs demonstrated by other posters,\na bit shift operator can be used to dynamically construct the bit mask from k,\nbut it's actually a lot smarter to bit-shift x itself in the opposite direction.\nBasically, we divide x by 2^(k-1).\nIf the (truncated) result is zero, then obviously, x is less than 2^(k-1).\n(x >> k-1) == 0\n\nor:\n!(x >> k-1)\n\nor (avoiding the arithmetic operator -):\n!(x << 1 >> k)\n\nor (avoiding the logical operator ! as well):\n(~(x<<1 | x | x>>1 | x>>2 | x>>3 | ... | x>>31) >> k) & 1\n\nThe latter demands knowledge of the width (in bits) of type int; assuming 32 bits here.\nAll of the above work only within certain domains for both x and k.\n\nx must be non-negative; my expression will incorrectly return false for negative numbers.\nk must be non-negative, and less than the width (in bits) of int (the type of x); my expression suffers from UB otherwise.\n\nIt is possible to make it work for wider domains.\nI doubt it is on-topic for a one-minute assignment, but nevertheless it makes an interesting exercise.\nTo support negative x, just use INT_MIN as a bit mask to test the sign bit.\n(x & INT_MIN) | ~((x<<1 | x | x>>1 | x>>2 | x>>3 | ... | x>>31) >> k) & 1\n\nOr if you insist on always having 'true' represented by 1:\n(x >> 31 | ~(x<<1 | x | x>>1 | x>>2 | x>>3 | ... | x>>31) >> k) & 1\n\n(Again assuming 32 bits; if different, change the literal 31 accordingly.)\nIf k is negative, then every positive x should yield false.\nPlease note that this is identical to the behavior for zero k.\nSo just replace k with (k & ~k>>31) (the bitwise equivalent of (k < 0 ? 0 : k) in 32-bits architecture).\n(x>>31 | ~(x<<1 | x | x>>1 | x>>2 | x>>3 | ... | x>>31) >> (k & ~k>>31)) & 1\n\nFor k greater than or equal to the size (in bits) of type int, every x should yield true:\n(((k>>5 | k>>6 | k>>7 | ... | k>>30) & ~k>>31) | x>>31 | ~(x<<1 | x | x>>1 | x>>2 | x>>3 | ... | x>>31) >> (k & ~k>>31)) & 1\n\nk>>5 as the first term may seem obscure. 5 is the logarithm of 32; in a 64-bits architecture we'd start with k>>6:\n(((k>>6 | k>>7 | k>>8 | ... | k>>62) & ~k>>63) | x>>63 | ~(x<<1 | x | x>>1 | x>>2 | x>>3 | ... 
| x>>63) >> (k & ~k>>63)) & 1\n\nIf it would be acceptable to use not only bitwise, but also logical operators,\nthen the expression (here 32 bits) could be simplified to:\n((k & ~31) && ~k>>31) || (x >> 31) || !(x << 1 >> (k & ~k>>31))\n\nHere is a simple test (32 bits):\n#include <stdio.h>\n#include <limits.h>\n\nstatic int purely_bitwise(int x, int k)\n{\n return (\n ((\n k>>5 | k>>6 | k>>7 | k>>8 | k>>9 |\n k>>10 | k>>11 | k>>12 | k>>13 | k>>14 | k>>15 | k>>16 | k>>17 | k>>18 | k>>19 |\n k>>20 | k>>21 | k>>22 | k>>23 | k>>24 | k>>25 | k>>26 | k>>27 | k>>28 | k>>29 |\n k>>30\n ) & ~k>>31) |\n x>>31 |\n ~(\n x<<1 | x | x>>1 | x>>2 | x>>3 | x>>4 | x>>5 | x>>6 | x>>7 | x>>8 | x>>9 |\n x>>10 | x>>11 | x>>12 | x>>13 | x>>14 | x>>15 | x>>16 | x>>17 | x>>18 | x>>19 |\n x>>20 | x>>21 | x>>22 | x>>23 | x>>24 | x>>25 | x>>26 | x>>27 | x>>28 | x>>29 |\n x>>30 | x>>31\n ) >> (k & ~k>>31)\n ) & 1;\n}\n\nstatic int bitwise_and_logical(int x, int k)\n{\n return ((k & ~31) && ~k>>31) || (x >> 31) || !(x << 1 >> (k & ~k>>31));\n}\n\nstatic int comparison(int x, int k)\n{\n return k >= 32 || x < (k < 1 ? 1 : 1 << (k-1));\n}\n\nstatic int values[] = {\n INT_MIN, INT_MIN+1, INT_MIN/2-1, INT_MIN/2, INT_MIN/2+1, -99, -9, -3, -2, -1,\n 0, 1, 2, 3, 9, 99, INT_MAX/2-1, INT_MAX/2, INT_MAX/2+1, INT_MAX-1, INT_MAX\n};\n#define LEN_VALUES (sizeof values / sizeof *values)\n\nint main()\n{\n for (int ik = 0; ik < LEN_VALUES; ik++)\n {\n for (int ix = 0; ix < LEN_VALUES; ix++)\n {\n int x = values[ix];\n int k = values[ik];\n int actual1 = purely_bitwise(x, k);\n int actual2 = bitwise_and_logical(x, k);\n int expected = comparison(x, k);\n if (actual1 != expected || actual2 != expected)\n {\n printf(\"x = %11d, k = %11d: expected = %d, actual = %d, %d\\n\",\n x, k, expected, actual1, actual2);\n }\n }\n }\n return 0;\n}\n\nThe program outputs nothing, which is good, as every line of output would be a failed test case.\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"bit_manipulation",
"c",
"less",
"operator_keyword"
] |
stackoverflow_0074647061_bit_manipulation_c_less_operator_keyword.txt
|
Q:
Why doesn't the abort function in Flask take the handlers?
I am developing a REST API with Python and Flask; I leave the project here: GitHub project
I added error handlers to the application but when I run an abort function, it gives me a default message from Flask, not the structure I am defining.
I will leave the path to the handlers and where I run the abort from.
Handlers abort(400)
Flask message
A:
OK, I was told the solution in another question.
What you have to do is override the handle_error method of the Api object.
With that, you can configure the format with which each request will be answered, even the ones that contain an error.
def response_structure(code_status: int, response=None, message=None):
    if code_status == 200 or code_status == 201:
        status = 'Success'
    else:
        status = 'Error'

    args = dict()
    args['status'] = status
    if message is not None:
        args['message'] = message

    if response is not None:
        args['response'] = response

    return args, code_status


class ExtendAPI(Api):

    def handle_error(self, e):
        return response_structure(e.code, str(e))
Once the method is overridden, you must use this new class to create the API:
users_bp = Blueprint('users', __name__)
api = ExtendAPI(users_bp)
With this, we can then use the Flask functions to respond with the structure that we define.
if request.args.get('name') is None:
    abort(400)
Response JSON
{
"response": "400 Bad Request: The browser (or proxy) sent a request that this server could not understand.",
"status": "Error"
}
|
Why doesn't the abort function in Flask take the handlers?
|
I am developing a REST API with Python and Flask; I leave the project here: GitHub project
I added error handlers to the application but when I run an abort function, it gives me a default message from Flask, not the structure I am defining.
I will leave the path to the handlers and where I run the abort from.
Handlers abort(400)
Flask message
|
[
"Ok, the solution was told to me that it could be in another question.\nWhat to do is to overwrite the handler function of the Flask Api object.\nWith that, you can configure the format with which each query will be answered, even the ones that contain an error.\ndef response_structure(code_status: int, response=None, message=None):\n if code_status == 200 or code_status == 201:\n status = 'Success'\n else:\n status = 'Error'\n\n args = dict()\n args['status'] = status\n if message is not None:\n args['message'] = message\n\n if response is not None:\n args['response'] = response\n\n return args, code_status\n\n\nclass ExtendAPI(Api):\n\n def handle_error(self, e):\n return response_structure(e.code, str(e))\n\nOnce the function is overwritten, you must use this new one to create\nusers_bp = Blueprint('users', __name__)\napi = ExtendAPI(users_bp)\n\nWith this, we can then use the flask functions to respond with the structure that we define.\nif request.args.get('name') is None:\n abort(400)\n\nResponse JSON\n{\n \"response\": \"400 Bad Request: The browser (or proxy) sent a request that this server could not understand.\",\n \"status\": \"Error\"\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"flask",
"python"
] |
stackoverflow_0074650833_flask_python.txt
|
Q:
Does declaring a new integer inside a loop change the space complexity?
Does declaring a new integer inside a loop change the space complexity of the method?
For example, looking at the following 2 methods, is the space complexity of both methods O(1)? Or, in the first method, because I'm declaring the variable c over and over until the loop ends, is its space complexity O(n)?
public static int what (int []a) {
int temp = 0;
for (int i = 0; i < a.length; i++) {
for (int j = i; j < a.length; j++) {
int c = f(a, i, j); // <-- declared inside the loop
if (c % 2 == 0) {
if (j - i + 1 > temp)
temp = j - i + 1;
}
}
}
return temp;
}
public static int what (int []a) {
int temp = 0;
int c; // <-- declared outside the loops
for (int i = 0; i < a.length; i++) {
for (int j = i; j < a.length; j++) {
c = f(a, i, j);
if (c % 2 == 0) {
if (j - i + 1 > temp)
temp = j - i + 1;
}
}
}
return temp;
}
Not sure if it's relevant to the question, but I've also attached the f method.
private static int f (int[]a, int low, int high)
{
int res = 0;
for (int i=low; i<=high; i++)
res += a[i];
return res;
}
A:
Declaring a new integer inside a loop does not change the space complexity of the method. In both of the methods you provided, the space complexity is O(1), because the space required does not depend on the size of the input array a. This is because the only variables that are declared in the method are temp, i, j, and c, and their sizes do not depend on the size of the input array.
In the first method, c is declared inside the inner loop, and it is reassigned on each iteration of the loop. However, this does not affect the space complexity of the method, because the size of c is constant and does not depend on the size of the input array.
In the second method, c is declared outside of the loops, and it is assigned the result of the f method on each iteration of the inner loop. Again, this does not affect the space complexity of the method, because the size of c is constant and does not depend on the size of the input array.
In general, the space complexity of a method is determined by the amount of memory required to store the variables that are used in the method, and how this memory usage depends on the size of the input. In both of the methods you provided, the space required to store the variables is constant, so the space complexity of the methods is O(1).
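For illustration, here is a contrast case (a hypothetical variant, not from the question): the space complexity would only become O(n) if the size of what you allocate grew with the input, e.g. allocating an array of length a.length inside the loop.
public static int whatWithArray(int[] a) {
    int temp = 0;
    for (int i = 0; i < a.length; i++) {
        int c = f(a, i, a.length - 1);     // O(1): a single int, input-independent size
        int[] buffer = new int[a.length];  // O(n): the allocation scales with the input
        buffer[i] = c;
        temp += buffer[i];
    }
    return temp;
}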
|
Does declaring a new integer inside a loop change the space complexity?
|
Does declaring a new integer inside a loop change the space complexity of the method?
For example, looking at the following 2 methods, is the space complexity of both methods O(1)? Or, in the first method, because I'm declaring the variable c over and over until the loop ends, is its space complexity O(n)?
public static int what (int []a) {
int temp = 0;
for (int i = 0; i < a.length; i++) {
for (int j = i; j < a.length; j++) {
int c = f(a, i, j); // <-- declared inside the loop
if (c % 2 == 0) {
if (j - i + 1 > temp)
temp = j - i + 1;
}
}
}
return temp;
}
public static int what (int []a) {
int temp = 0;
int c; // <-- declared outside the loops
for (int i = 0; i < a.length; i++) {
for (int j = i; j < a.length; j++) {
c = f(a, i, j);
if (c % 2 == 0) {
if (j - i + 1 > temp)
temp = j - i + 1;
}
}
}
return temp;
}
Not sure if it's relevant to the question, but I've also attached the f method.
private static int f (int[]a, int low, int high)
{
int res = 0;
for (int i=low; i<=high; i++)
res += a[i];
return res;
}
|
[
"Declaring a new integer inside a loop does not change the space complexity of the method. In both of the methods you provided, the space complexity is O(1), because the space required does not depend on the size of the input array a. This is because the only variables that are declared in the method are temp, i, j, and c, and their sizes do not depend on the size of the input array.\nIn the first method, c is declared inside the inner loop, and it is reassigned on each iteration of the loop. However, this does not affect the space complexity of the method, because the size of c is constant and does not depend on the size of the input array.\nIn the second method, c is declared outside of the loops, and it is assigned the result of the f method on each iteration of the inner loop. Again, this does not affect the space complexity of the method, because the size of c is constant and does not depend on the size of the input array.\nIn general, the space complexity of a method is determined by the amount of memory required to store the variables that are used in the method, and how this memory usage depends on the size of the input. In both of the methods you provided, the space required to store the variables is constant, so the space complexity of the methods is O(1).\n"
] |
[
1
] |
[] |
[] |
[
"complexity_theory",
"space_complexity"
] |
stackoverflow_0074661690_complexity_theory_space_complexity.txt
|
Q:
Why does the last digit of the seconds get stuck after passing the 1h mark?
I decided to make a clock in C as my first "project". Changing minutes after the 60-second mark went well, but when I need to change hours after minutes, the second digit of the seconds won't reset, so it stays stuck at 9 until the count reaches 10 seconds again.
#include<stdio.h>
#include<Windows.h>
int main ()
{
int h = 0,m = 59, s = 55;
int prekid = 1;
while (prekid == 1)
{
printf(" \r H:%d | M: %d | S: %d",h,m,s);
s++;
fflush(stdout);
if (s == 60)
{
m++;
s= 0;
}
if (m == 60)
{
m = 0;
s= 0;
h++;
}
sleep(1);
}
return 0;
}
A:
Reason for your issue: when the line gets shorter, characters from the previous, longer line remain on screen.
Solution 1, add enough spaces, 3 in this case I believe, at the end of format string to overwrite extra numbers:
printf(" \r H:%d | M: %d | S: %d ",h,m,s);
Solution 2, write fixed width output, here with leading zeroes:
printf(" \r H:%02d | M: %02d | S: %02d",h,m,s);
A:
Rather than print a variable width of text (and leave left-overs that caused OP's troubles), print a fixed width:
// printf(" \r H:%d | M: %d | S: %d",h,m,s);
printf("\r H:%2d | M:%2d | S:%2d", h, m, s);
"%2d" directs printf() to print at least 2 characters, padding on the left with spaces as needed.
|
Why does the last digit of the seconds get stuck after passing the 1h mark?
|
I decided to make a clock in C as my first "project". Changing minutes after the 60-second mark went well, but when I need to change hours after minutes, the second digit of the seconds won't reset, so it stays stuck at 9 until the count reaches 10 seconds again.
#include<stdio.h>
#include<Windows.h>
int main ()
{
int h = 0,m = 59, s = 55;
int prekid = 1;
while (prekid == 1)
{
printf(" \r H:%d | M: %d | S: %d",h,m,s);
s++;
fflush(stdout);
if (s == 60)
{
m++;
s= 0;
}
if (m == 60)
{
m = 0;
s= 0;
h++;
}
sleep(1);
}
return 0;
}
|
[
"Reason for your issue: when line gets shorter, characters from previous, longer line remain on screen.\nSolution 1, add enough spaces, 3 in this case I believe, at the end of format string to overwrite extra numbers:\nprintf(\" \\r H:%d | M: %d | S: %d \",h,m,s);\n\nSolution 2, write fixed width output, here with leading zeroes:\nprintf(\" \\r H:%02d | M: %02d | S: %02d\",h,m,s);\n\n",
"Rathe than print a variable width of text (and leave lefts-overs that caused OP's troubles), print a fixed width:\n// printf(\" \\r H:%d | M: %d | S: %d\",h,m,s);\nprintf(\"\\r H:%2d | M:%2d | S:%2d\", h, m, s);\n\n\"%2d\" directs printf() to print at least 2 characters, padding on the left with spaces as needed.\n"
] |
[
2,
2
] |
[] |
[] |
[
"c"
] |
stackoverflow_0074661780_c.txt
|
Q:
Access Cloudflare Worker from local environments
I've setup a functional Cloudflare Worker via its route and domain and am using the Worker playground and the quick editor to avoid a deployment.
However, when developing locally I cannot make a request to the Worker and get a CORS error.
I’ve read all the docs and implemented most CF security features within Zero Trust. However, nothing is getting us access to our deployed Worker due to strict CORS rules (which we want).
On my machine I am routing through WARP and it is configured for my
team name.
I have installed and configured a root access certificate, perhaps
not applicable to this issue.
I have also tried to manually auth by visiting the worker URL and
getting a login code emailed to me. Perhaps CF Access is not related
to Workers?
We need clarification because the docs do not clearly explain the flow for access to Worker URLs when working on localhost.
Community question here.
How do we develop apps with Workers and strict CORS by authenticating a computer or user?
A:
I think you can use Transform Rules to set/remove/update CORS headers.
It should work for you because, according to the traffic sequence diagram, header modifications are performed before Workers run.
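If Transform Rules don't cover your case, the Worker itself can answer the CORS preflight for a development origin. A minimal sketch, assuming the local app runs on http://localhost:3000 (that origin value is an assumption):
const ALLOWED_ORIGIN = "http://localhost:3000"; // assumed dev origin

export default {
  async fetch(request) {
    const corsHeaders = {
      "Access-Control-Allow-Origin": ALLOWED_ORIGIN,
      "Access-Control-Allow-Methods": "GET,POST,OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type,Authorization",
    };

    // answer the preflight request directly
    if (request.method === "OPTIONS") {
      return new Response(null, { status: 204, headers: corsHeaders });
    }

    // attach the CORS headers to the normal response
    return new Response(JSON.stringify({ ok: true }), {
      headers: { "Content-Type": "application/json", ...corsHeaders },
    });
  },
};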
|
Access Cloudflare Worker from local environments
|
I've setup a functional Cloudflare Worker via its route and domain and am using the Worker playground and the quick editor to avoid a deployment.
However, when developing locally I cannot make a request to the Worker and get a CORS error.
I’ve read all the docs and implemented most CF security features within Zero Trust. However, nothing is getting us access to our deployed Worker due to strict CORS rules (which we want).
On my machine I am routing through WARP and it is configured for my
team name.
I have installed and configured a root access certificate, perhaps
not applicable to this issue.
I have also tried to manually auth by visiting the worker URL and
getting a login code emailed to me. Perhaps CF Access is not related
to Workers?
We need clarification because the docs do not clearly explain the flow for access to Worker URLs when working on localhost.
Community question here.
How do we develop apps with Workers and strict CORS by authenticating a computer or user?
|
[
"I think you can use Transform Rules for set/remove/update CORS.\nIt should work for you, because according to traffic sequence diagram header modifications performs before workers. \n"
] |
[
1
] |
[] |
[] |
[
"cloudflare",
"microservices",
"worker"
] |
stackoverflow_0074563666_cloudflare_microservices_worker.txt
|
Q:
Why we don't need to add semicolon to the end of jsx?
I'm new to React and JSX, so just a question on JSX syntax. We know we can do something like:
export default function App() {
return <h1>Hello World</h1>
}
but don't we need to add a semicolon at the end of the JSX? As in:
export default function App() {
return <h1>Hello World</h1>;
}
A:
JSX doesn't affect anything in this scenario compared to normal JS syntax.
export default function App() {
return <h1>Hello World</h1>;
//^^^^^^ ^
// ^^^^^^^^^^^^^^^^^^^^
// 1 2 3
//keyword expression end of statement
}
When writing code using JSX syntax, each JSX expression is simply an expression, like any other expression. More specifically, each JSX expression is evaluated by the transpiler and converted into a function invocation expression — you can read more at the article JSX In Depth on the React website.
Whether you return a JSX value or a number value or a string value makes no difference in regard to the need for a semicolon.
const myValue = "hello world";
export default function () {
return myValue;
//^^^^^^ ^
// ^^^^^^^
// 1 2 3
//keyword ^ end of statement
// expression
}
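For illustration, this is roughly what the transpiler does with the component above (a sketch of the classic-runtime output, not the exact emitted code):
// JSX source
export default function App() {
  return <h1>Hello World</h1>;
}

// is compiled to, approximately:
import React from "react";
export default function App() {
  return React.createElement("h1", null, "Hello World");
}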
Regarding semicolon usage: I won't get into the reasons why semicolons should be used — that has already been covered on this site.
A:
In JavaScript, semicolons are optional in most situations because they are inserted automatically.
There are not many situations where you actually have to use them, and there is an ongoing discussion between developers about whether to write semicolons explicitly or not.
See http://www.bradoncode.com/blog/2015/08/26/javascript-semi-colon-insertion/ for more details.
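One classic case where automatic insertion bites, and the reason multi-line JSX returns are wrapped in parentheses, is a line break straight after return. A minimal sketch:
function broken() {
  return          // a semicolon is inserted here automatically...
    "hello";      // ...so this is unreachable and the function returns undefined
}

function fixed() {
  return (        // the parenthesis keeps the expression attached to return
    "hello"
  );
}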
|
Why we don't need to add semicolon to the end of jsx?
|
I'm new to React and JSX, so just a question on JSX syntax. We know we can do something like:
export default function App() {
return <h1>Hello World</h1>
}
but don't we need to add a semicolon at the end of the JSX? As in:
export default function App() {
return <h1>Hello World</h1>;
}
|
[
"JSX doesn't affect anything in this scenario compared to normal JS syntax.\nexport default function App() {\n return <h1>Hello World</h1>;\n//^^^^^^ ^\n// ^^^^^^^^^^^^^^^^^^^^\n// 1 2 3\n//keyword expression end of statement\n}\n\nWhen writing code using JSX syntax, each JSX expression is simply an expression, like any other expression. More specifically, each JSX expression is evaluated by the transpiler and converted into a function invocation expression — you can read more at the article JSX In Depth on the React website.\nWhether you return a JSX value or a number value or a string value makes no difference in regard to the need for a semicolon.\nconst myValue = \"hello world\";\n\nexport default function () {\n return myValue;\n//^^^^^^ ^\n// ^^^^^^^\n// 1 2 3\n//keyword ^ end of statement\n// expression\n}\n\nRegarding semicolon usage: I won't get into the reasons why semicolons should be used — that has already been covered on this site.\n",
"In javascript semicolons are optional in most situations because they are inserted automatically.\nThere are not many situations when you actually have to use them and there is an ongoing discussion between developers whether to write semicolons explicitly or not.\nSee http://www.bradoncode.com/blog/2015/08/26/javascript-semi-colon-insertion/ for more details.\n"
] |
[
2,
0
] |
[] |
[] |
[
"jsx",
"reactjs"
] |
stackoverflow_0058320777_jsx_reactjs.txt
|
Q:
How to hide/show a register with djangoadmin?
I have a model with customers
in my front I have a combobox with the list of my customers.
how I can in djangoadmin hide or show this customers?
I think it could be a checkbox like "this customer is active/inactive"
but how can I solve this with djangoAdmin?
I think it could be a checkbox like "this customer is active/inactive"
but how can I solve this with djangoAdmin?
A:
You can add a field to your Customer model called is_active as a BooleanField, and then in the objects manager you add this code:
def get_queryset(self):
    return super().get_queryset().filter(is_active=True)

When you mark an object's is_active as False, it will disappear from the Django admin.
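For illustration, a fuller sketch of what that could look like (model and field names are assumptions):
from django.db import models

class ActiveCustomerManager(models.Manager):
    def get_queryset(self):
        # only return customers flagged as active
        return super().get_queryset().filter(is_active=True)

class Customer(models.Model):
    name = models.CharField(max_length=100)
    is_active = models.BooleanField(default=True)  # the admin checkbox

    objects = ActiveCustomerManager()  # the front-end combobox sees only active customers
    all_objects = models.Manager()     # unfiltered escape hatch, e.g. for the admin

Note the second, unfiltered manager: if objects alone is overridden, inactive customers vanish from the admin too, which would make it impossible to re-activate them there.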
|
How to hide/show a register with djangoadmin?
|
I have a model with customers.
In my front end I have a combobox with the list of my customers.
How can I hide or show these customers in Django admin?
I think it could be a checkbox like "this customer is active/inactive", but how can I solve this with Django admin?
|
[
"You can add the field in your customer column called is_active as boolean filed and then in the objects manager you add this code:\n def get_queryset(self):\n return super().get_queryset().filter(is_active=True)\n\nand when you will mark the object is_active as False that will be disappreared from the django admin.\n"
] |
[
0
] |
[] |
[] |
[
"django",
"django_forms",
"django_models",
"django_queryset",
"user_controls"
] |
stackoverflow_0074660959_django_django_forms_django_models_django_queryset_user_controls.txt
|
Q:
Power Automate Desktop Fill Data from excel to website but multi UI ELEMENT
I am a newbie working on Power Automate Desktop, and sorry if my English is not good. Currently I am making a flow that populates a website from my Excel file. In Excel I have a list of entries, and I want the flow to fill them in, in order. But I am having a problem: when I finish filling in the first position, the page shows the second position, then the third, and so on, and the HTML input ID also changes in ascending order. So when I run this flow, it only fills in the first entry of my Excel list. Is there a way to automatically advance to the UI element that matches the current row? Thank you.
The input ID increases like adGroupInfo.1.adcontent; once you fill that one, the page shows the second one, and I want to fill data from my Excel into that one, whose ID will be adGroupInfo.2.adcontent.
The DIV and SPAN are all the same, but the input ID differs.
A:
For this you will make use of a variable.
Dummy data:
Navigate to the element in the element list.
Then click on the 3-dots menu to see the options.
Select Edit
Then scroll down to the enabled selector, this will in most cases be the id of the element on the web page.
Replace the value with the name of the variable that will hold the control ID value.
Then in your loop assign the value of the UI element to the variable you used as the identifier.
Line 7 (red 2)
This should then fill the boxes as required.
|
Power Automate Desktop Fill Data from excel to website but multi UI ELEMENT
|
I am a newbie working on Power Automate Desktop, and sorry if my English is not good. Currently I am making a flow that populates a website from my Excel file. In Excel I have a list of entries, and I want the flow to fill them in, in order. But I am having a problem: when I finish filling in the first position, the page shows the second position, then the third, and so on, and the HTML input ID also changes in ascending order. So when I run this flow, it only fills in the first entry of my Excel list. Is there a way to automatically advance to the UI element that matches the current row? Thank you.
The input ID increases like adGroupInfo.1.adcontent; once you fill that one, the page shows the second one, and I want to fill data from my Excel into that one, whose ID will be adGroupInfo.2.adcontent.
The DIV and SPAN are all the same, but the input ID differs.
|
[
"For this you will make use of a variable.\nDummy data:\n\nNavigate to the element in the element list.\nThe click on the 3 dots menu to see the options.\nSelect Edit\n\n\nThen scroll down to the enabled selector, this will in most cases be the id of the element on the web page.\n\nreplace the value with the name of the variable that will hold the control id value\n\nThen in your loop assign the value of the UI element to the variable you used as the identifier.\nLine 7 (red 2)\n\nThis should then fill the boxes as required.\n\n"
] |
[
0
] |
[] |
[] |
[
"power_automate_desktop"
] |
stackoverflow_0073674939_power_automate_desktop.txt
|
Q:
Jenkins Xray Integration - Jira Issue Type with wrong character
In my Jenkins pipeline I have a Jira/Xray integration step:
step([$class: 'XrayImportBuilder',
endpointName: '/xunit',
fixVersion: '1.0',
importFilePath: '/MyFirstUnitTests/TestResults.xml',
importToSameExecution: 'true',
testExecKey: 'TSTLKS-753',
serverInstance: '9146a388-e399-4e55-be28-8c65404d6f9d',
credentialId:'75287529-134d-4s91-9964-7h740d8d2i63'])
Currently I'm having the following error :
ERROR: Unable to confirm Result of the upload..... Upload Failed!
Status:400 Response:{"error":"Issue with key
\u0027TSTLKS-753\u0027 does not exist or is not of type Test
Execution."}
But my issue (TSTLKS-753) is of type "Test Execution":
It appears that the string "\u0027" is being added both as a prefix
and as a suffix on my issue when building the pipeline.
I've searched for this string and it appears to be a Quotation Mark:
I tried replacing them with double quotes, but I end up with the same error. I also tried to remove them.
In any case, if someone already got this error please let me know. Thank you very much
A:
Can you confirm that the user that you have configured in Jenkins for the Xray instance has access to that Jira project where you have your Test Execution issue?
Can you try to import it without specifying testExecKey field, with importToSameExecution: 'false', and specifying the projectKey field using something like projectKey: 'TSTLKS' ?
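For illustration, a sketch of that second attempt, reusing the step from the question (all values other than importToSameExecution and projectKey are copied from the question; adjust as needed):
step([$class: 'XrayImportBuilder',
      endpointName: '/xunit',
      fixVersion: '1.0',
      importFilePath: '/MyFirstUnitTests/TestResults.xml',
      importToSameExecution: 'false',
      projectKey: 'TSTLKS', // let Xray create a new Test Execution in this project
      serverInstance: '9146a388-e399-4e55-be28-8c65404d6f9d',
      credentialId: '75287529-134d-4s91-9964-7h740d8d2i63'])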
If this last option returns an error (e.g. "project does not exist") then it's for sure a permission issue, so you'll either need to use a different Jira user/pass or fix the permissions on Jira side.
|
Jenkins Xray Integration - Jira Issue Type with wrong character
|
In my Jenkins pipeline I have a Jira/Xray integration step:
step([$class: 'XrayImportBuilder',
endpointName: '/xunit',
fixVersion: '1.0',
importFilePath: '/MyFirstUnitTests/TestResults.xml',
importToSameExecution: 'true',
testExecKey: 'TSTLKS-753',
serverInstance: '9146a388-e399-4e55-be28-8c65404d6f9d',
credentialId:'75287529-134d-4s91-9964-7h740d8d2i63'])
Currently I'm having the following error :
ERROR: Unable to confirm Result of the upload..... Upload Failed!
Status:400 Response:{"error":"Issue with key
\u0027TSTLKS-753\u0027 does not exist or is not of type Test
Execution."}
But my issue (TSTLKS-753) is of type "Test Execution":
It appears that the string "\u0027" is being added both as a prefix
and as a suffix on my issue when building the pipeline.
I've searched for this string and it appears to be a Quotation Mark:
I tried replacing them with double quotes, but I end up with the same error. I also tried to remove them.
In any case, if someone already got this error please let me know. Thank you very much
|
[
"Can you confirm that the user that you have configured in Jenkins for the Xray instance has access to that Jira project where you have your Test Execution issue?\nCan you try to import it without specifying testExecKey field, with importToSameExecution: 'false', and specifying the projectKey field using something like projectKey: 'TSTLKS' ?\nIf this last option returns an error (e.g. \"project does not exist\") then it's for sure a permission issue, so you'll either need to use a different Jira user/pass or fix the permissions on Jira side.\n"
] |
[
0
] |
[] |
[] |
[
"jenkins",
"jenkins_pipeline",
"jira",
"jira_xray",
"x_ray"
] |
stackoverflow_0074646112_jenkins_jenkins_pipeline_jira_jira_xray_x_ray.txt
|
Q:
Is there a way to make the calendar read-only for non-admins (AAD)
In our scenario, users get authenticated via free Azure AD. We would like to make the modal window (AddEditForm) hidden for all non-admins. Since it's only a couple of admins, it's OK to use the admin userids in a condition, such as:
var uname = User.Identity.GetUserId()
if uname == '[email protected]', uname== '[email protected]' { $('#myModal').modal() }
This is for a quick proof of concept only. Thanks!
A:
It sounds like you want to show a modal window only to certain users who are authenticated with Azure AD. There are a few different ways you could approach this problem.
One approach would be to check the user's claims after they authenticate with Azure AD. Azure AD provides a number of different claims about the authenticated user, including their email address and their role within the organization. You could use these claims to determine whether or not to show the modal window.
Another approach would be to use Azure AD groups to manage access to the modal window. You could create an Azure AD group for the users who should be able to see the modal window, and then check if the authenticated user is a member of that group before showing the modal. This would be a more scalable solution if you have a larger number of users who need access to the modal.
It's important to keep in mind that the approach you're currently using, where you hard-code a list of user email addresses in your code, is not a secure or maintainable solution. It would be better to use one of the approaches I mentioned above, or to find another solution that is more appropriate for your use case.
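For illustration, a minimal sketch of the claims-based check in ASP.NET MVC (the controller name, role name, and group GUID are assumptions, not values from your tenant):
using System.Linq;
using System.Security.Claims;
using System.Web.Mvc;

public class CalendarController : Controller
{
    public ActionResult Index()
    {
        var identity = (ClaimsIdentity)User.Identity;

        // Option 1: a role claim issued by Azure AD (role name is assumed)
        bool isAdmin = identity.HasClaim(ClaimTypes.Role, "CalendarAdmin");

        // Option 2: membership of a specific AAD group (GUID is assumed)
        bool inAdminGroup = identity.Claims.Any(c =>
            c.Type == "groups" &&
            c.Value == "00000000-0000-0000-0000-000000000000");

        // The view can then render the AddEditForm modal only when this is true
        ViewBag.CanEdit = isAdmin || inAdminGroup;
        return View();
    }
}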
|
Is there a way to make the calendar read-only for non-admins (AAD)
|
In our scenario, users get authenticated via free Azure AD. We would like to make the modal window (AddEditForm) hidden for all non-admins. Since it's only a couple of admins, it's OK to use the admin userids in a condition, such as:
var uname = User.Identity.GetUserId()
if uname == '[email protected]', uname== '[email protected]' { $('#myModal').modal() }
This is for a quick proof of concept only. Thanks!
|
[
"It sounds like you want to show a modal window only to certain users who are authenticated with Azure AD. There are a few different ways you could approach this problem.\nOne approach would be to check the user's claims after they authenticate with Azure AD. Azure AD provides a number of different claims about the authenticated user, including their email address and their role within the organization. You could use these claims to determine whether or not to show the modal window.\nAnother approach would be to use Azure AD groups to manage access to the modal window. You could create an Azure AD group for the users who should be able to see the modal window, and then check if the authenticated user is a member of that group before showing the modal. This would be a more scalable solution if you have a larger number of users who need access to the modal.\nIt's important to keep in mind that the approach you're currently using, where you hard-code a list of user email addresses in your code, is not a secure or maintainable solution. It would be better to use one of the approaches I mentioned above, or to find another solution that is more appropriate for your use case.\n"
] |
[
0
] |
[] |
[] |
[
"fullcalendar"
] |
stackoverflow_0074661552_fullcalendar.txt
|
Q:
How to solve my error in redshiftsinkconnector
I am trying to connect Kafka and Redshift with the Redshift sink connector. The connector is running, but the task has failed.
A:
Your error - Failed to deserialize data in topic ... to Avro
So, if your data is not Avro, then change your key.converter and/or value.converter to the appropriate config. You need to consult your Producer code for the matching serializers.
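For illustration, a minimal sketch of the relevant connector properties, assuming the topic actually holds schemaless JSON (the converter choice must match whatever serializer your producer uses):
{
  "key.converter": "org.apache.kafka.connect.storage.StringConverter",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "value.converter.schemas.enable": "false"
}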
|
How to solve my error in redshiftsinkconnector
|
I am trying to connect Kafka and Redshift with the Redshift sink connector. The connector is running, but the task has failed.
|
[
"Your error - Failed to deserialize data in topic ... to Avro\nSo, if your data is not Avro, then change your key.converter and/or value.converter to the appropriate config. You need to consult your Producer code for the matching serializers.\n"
] |
[
0
] |
[] |
[] |
[
"amazon_redshift",
"apache_kafka",
"apache_kafka_connect"
] |
stackoverflow_0074641991_amazon_redshift_apache_kafka_apache_kafka_connect.txt
|
Q:
C# and Python result difference - basic Math
So I have tried the same math in C# and Python but got 2 different answers. Can someone please explain why this is happening?
def test():
    l = 50
    r = 3
    gg = l + (r - l) / 2
    mid = l + (r - l) // 2
    print(mid)
    print(gg)
public void test()
{
var l = 50;
var r = 3;
var gg = l + (r - l) / 2;
double x = l + (r - l) / 2;
var mid = Math.Floor(x);
Console.WriteLine(mid);
Console.WriteLine(gg);
}
A:
In C#, the / operator performs integer division (ignores the fractional part) when both values are type int. For example, 3 / 2 = 1, since the fractional part (0.5) is dropped.
As a result, in your equation, the operation (r - l) / 2 is evaluating to -23, since (3 - 50) / 2 = -47 / 2 = -23 (again, the fractional part is dropped). Then, 50 + (-23) = 27.
However, Python does not do this. By default, all division, whether between integers or doubles, is "normal" division - the fractional part is kept. Because of that, the result is the same as you'd get on a calculator: 50 + (3 - 50) / 2 = 26.5
If you want C# to calculate this the same way as Python, the easiest way is to make one of the numbers a double. Adding .0 to the end of the divisor should do the trick:
// changed '2' to '2.0'
var gg = l + (r - l) / 2.0;
double x = l + (r - l) / 2.0;
26
26.5
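One more subtlety worth noting: Python's // floors toward negative infinity, while C# integer division truncates toward zero, so the two can still differ by one for negative intermediate results. A quick check in Python:
print(-47 // 2)      # -24: floor division rounds toward negative infinity
print(int(-47 / 2))  # -23: truncation toward zero, matching C#'s -47 / 2

That is why the fix above pairs the 2.0 divisor with Math.Floor, which reproduces Python's // here.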
|
C# and Python result difference - basic Math
|
So I have tried the same math in C# and Python but got 2 different answers. Can someone please explain why this is happening?
def test():
    l = 50
    r = 3
    gg = l + (r - l) / 2
    mid = l + (r - l) // 2
    print(mid)
    print(gg)
public void test()
{
var l = 50;
var r = 3;
var gg = l + (r - l) / 2;
double x = l + (r - l) / 2;
var mid = Math.Floor(x);
Console.WriteLine(mid);
Console.WriteLine(gg);
}
|
[
"In C#, the / operator performs integer division (ignores the fractional part) when both values are type int. For example, 3 / 2 = 1, since the fractional part (0.5) is dropped.\nAs a result, in your equation, the operation (r - l) / 2 is evaluating to -23, since (3 - 50) / 2 = -47 / 2 = -23 (again, the fractional part is dropped). Then, 50 + (-23) = 27.\nHowever, Python does not do this. By default, all division, whether between integers or doubles, is \"normal\" division - the fractional part is kept. Because of that, the result is the same as you'd get on a calculator: 50 + (3 - 50) / 2 = 26.5\nIf you want C# to calculate this the same way as Python, the easiest way is to make one of the numbers a double. Adding .0 to the end of the divisor should do the trick:\n// changed '2' to '2.0'\nvar gg = l + (r - l) / 2.0;\ndouble x = l + (r - l) / 2.0;\n\n\n26\n26.5\n\n"
] |
[
0
] |
[] |
[] |
[
"c#",
"floor_division",
"math",
"python"
] |
stackoverflow_0074501942_c#_floor_division_math_python.txt
|
Q:
Find all combinations of n1 elements from vector1 and n2 elements from vector2 in R?
I have two vectors and I am trying to find all unique combinations of 3 elements from vector1 and 2 elements from vector2. I have tried the following code.
V1 = combn(1:5, 3) # 10 combinations in total
V2 = combn(6:11, 2) # 15 combinations in total
How to combine V1 and V2 so that there are 10 * 15 = 150 combinations in total? Thanks.
A:
The function comboGrid from RcppAlgos (I am the author) does just the trick:
library(RcppAlgos)
grid <- comboGrid(c(rep(list(1:5), 3), rep(list(6:11), 2)),
repetition = FALSE)
head(grid)
#> Var1 Var2 Var3 Var4 Var5
#> [1,] 1 2 3 6 7
#> [2,] 1 2 3 6 8
#> [3,] 1 2 3 6 9
#> [4,] 1 2 3 6 10
#> [5,] 1 2 3 6 11
#> [6,] 1 2 3 7 8
tail(grid)
#> Var1 Var2 Var3 Var4 Var5
#> [145,] 3 4 5 8 9
#> [146,] 3 4 5 8 10
#> [147,] 3 4 5 8 11
#> [148,] 3 4 5 9 10
#> [149,] 3 4 5 9 11
#> [150,] 3 4 5 10 11
It is quite efficient as well. It is written in C++ and pulls together many ideas from the excellent question: Picking unordered combinations from pools with overlap. The underlying algorithm avoids generating duplicates that would need to be filtered out.
Consider the following example where generating the Cartesian product contains more than 10 billion results:
system.time(huge <- comboGrid(c(rep(list(1:20), 5), rep(list(21:35), 3)),
repetition = FALSE))
#> user system elapsed
#> 0.990 0.087 1.077
dim(huge)
#> [1] 7054320 8
A:
You can try expand.grid along with asplit, e.g.,
expand.grid(asplit(V1,2), asplit(V2,2))
or
with(
expand.grid(asplit(V1, 2), asplit(V2, 2)),
t(mapply(c, Var1, Var2))
)
A:
You can use expand.grid():
g <- expand.grid(seq_len(ncol(V1)), seq_len(ncol(V2)))
V3 <- rbind(V1[, g[, 1]], V2[, g[, 2]])
The result is in a similar format as V1 and V2, i.e. a 5 × 150 matrix (here printed transposed):
head(t(V3))
# [,1] [,2] [,3] [,4] [,5]
# [1,] 1 2 3 6 7
# [2,] 1 2 4 6 7
# [3,] 1 2 5 6 7
# [4,] 1 3 4 6 7
# [5,] 1 3 5 6 7
# [6,] 1 4 5 6 7
dim(unique(t(V3)))
# [1] 150 5
And a generalized approach that can handle more than two initial matrices of combinations, stored in a list V:
V <- list(V1, V2)
g <- do.call(expand.grid, lapply(V, \(x) seq_len(ncol(x))))
V.comb <- do.call(rbind, mapply('[', V, T, g))
identical(V.comb, V3)
[1] TRUE
A:
After some helpful refactoring guidance from @onyambu, here is a shorter solution based on base::merge():
merge(t(combn(1:5, 3)),t(combn(6:11, 2)),by.x=NULL,by.y = NULL)
...and the first 20 rows of output:
> merge(t(combn(1:5, 3)),t(combn(6:11, 2)),by.x=NULL,by.y = NULL)
V1.x V2.x V3 V1.y V2.y
1 1 2 3 6 7
2 1 2 4 6 7
3 1 2 5 6 7
4 1 3 4 6 7
5 1 3 5 6 7
6 1 4 5 6 7
7 2 3 4 6 7
8 2 3 5 6 7
9 2 4 5 6 7
10 3 4 5 6 7
11 1 2 3 6 8
12 1 2 4 6 8
13 1 2 5 6 8
14 1 3 4 6 8
15 1 3 5 6 8
16 1 4 5 6 8
17 2 3 4 6 8
18 2 3 5 6 8
19 2 4 5 6 8
20 3 4 5 6 8
original solution
A base R solution to create a Cartesian product with merge() looks like this:
df1 <- data.frame(t(combn(1:5, 3)))
df2 <- data.frame(t(combn(6:11, 2)))
colnames(df2) <- paste("y", 1:2, sep="")
merge(df1,df2,by.x=NULL,by.y = NULL)
...and the first 25 rows of output:
> merge(df1,df2,by.x=NULL,by.y = NULL)
X1 X2 X3 y1 y2
1 1 2 3 6 7
2 1 2 4 6 7
3 1 2 5 6 7
4 1 3 4 6 7
5 1 3 5 6 7
6 1 4 5 6 7
7 2 3 4 6 7
8 2 3 5 6 7
9 2 4 5 6 7
10 3 4 5 6 7
11 1 2 3 6 8
12 1 2 4 6 8
13 1 2 5 6 8
14 1 3 4 6 8
15 1 3 5 6 8
16 1 4 5 6 8
17 2 3 4 6 8
18 2 3 5 6 8
19 2 4 5 6 8
20 3 4 5 6 8
21 1 2 3 6 9
22 1 2 4 6 9
23 1 2 5 6 9
24 1 3 4 6 9
25 1 3 5 6 9
A:
Similar idea, using apply
apply(expand.grid(seq(ncol(V1)), seq(ncol(V2))), 1, function(i) {
c(V1[,i[1]], V2[,i[2]])})
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13] [,14]
#> [1,] 1 1 1 1 1 1 2 2 2 3 1 1 1 1
#> [2,] 2 2 2 3 3 4 3 3 4 4 2 2 2 3
#> [3,] 3 4 5 4 5 5 4 5 5 5 3 4 5 4
#> [4,] 6 6 6 6 6 6 6 6 6 6 6 6 6 6
#> [5,] 7 7 7 7 7 7 7 7 7 7 8 8 8 8
#> [,15] [,16] [,17] [,18] [,19] [,20] [,21] [,22] [,23] [,24] [,25] [,26]
#> [1,] 1 1 2 2 2 3 1 1 1 1 1 1
#> [2,] 3 4 3 3 4 4 2 2 2 3 3 4
#> [3,] 5 5 4 5 5 5 3 4 5 4 5 5
#> [4,] 6 6 6 6 6 6 6 6 6 6 6 6
#> [5,] 8 8 8 8 8 8 9 9 9 9 9 9
#> [,27] [,28] [,29] [,30] [,31] [,32] [,33] [,34] [,35] [,36] [,37] [,38]
#> [1,] 2 2 2 3 1 1 1 1 1 1 2 2
#> [2,] 3 3 4 4 2 2 2 3 3 4 3 3
#> [3,] 4 5 5 5 3 4 5 4 5 5 4 5
#> [4,] 6 6 6 6 6 6 6 6 6 6 6 6
#> [5,] 9 9 9 9 10 10 10 10 10 10 10 10
#> [,39] [,40] [,41] [,42] [,43] [,44] [,45] [,46] [,47] [,48] [,49] [,50]
#> [1,] 2 3 1 1 1 1 1 1 2 2 2 3
#> [2,] 4 4 2 2 2 3 3 4 3 3 4 4
#> [3,] 5 5 3 4 5 4 5 5 4 5 5 5
#> [4,] 6 6 6 6 6 6 6 6 6 6 6 6
#> [5,] 10 10 11 11 11 11 11 11 11 11 11 11
#> [,51] [,52] [,53] [,54] [,55] [,56] [,57] [,58] [,59] [,60] [,61] [,62]
#> [1,] 1 1 1 1 1 1 2 2 2 3 1 1
#> [2,] 2 2 2 3 3 4 3 3 4 4 2 2
#> [3,] 3 4 5 4 5 5 4 5 5 5 3 4
#> [4,] 7 7 7 7 7 7 7 7 7 7 7 7
#> [5,] 8 8 8 8 8 8 8 8 8 8 9 9
#> [,63] [,64] [,65] [,66] [,67] [,68] [,69] [,70] [,71] [,72] [,73] [,74]
#> [1,] 1 1 1 1 2 2 2 3 1 1 1 1
#> [2,] 2 3 3 4 3 3 4 4 2 2 2 3
#> [3,] 5 4 5 5 4 5 5 5 3 4 5 4
#> [4,] 7 7 7 7 7 7 7 7 7 7 7 7
#> [5,] 9 9 9 9 9 9 9 9 10 10 10 10
#> [,75] [,76] [,77] [,78] [,79] [,80] [,81] [,82] [,83] [,84] [,85] [,86]
#> [1,] 1 1 2 2 2 3 1 1 1 1 1 1
#> [2,] 3 4 3 3 4 4 2 2 2 3 3 4
#> [3,] 5 5 4 5 5 5 3 4 5 4 5 5
#> [4,] 7 7 7 7 7 7 7 7 7 7 7 7
#> [5,] 10 10 10 10 10 10 11 11 11 11 11 11
#> [,87] [,88] [,89] [,90] [,91] [,92] [,93] [,94] [,95] [,96] [,97] [,98]
#> [1,] 2 2 2 3 1 1 1 1 1 1 2 2
#> [2,] 3 3 4 4 2 2 2 3 3 4 3 3
#> [3,] 4 5 5 5 3 4 5 4 5 5 4 5
#> [4,] 7 7 7 7 8 8 8 8 8 8 8 8
#> [5,] 11 11 11 11 9 9 9 9 9 9 9 9
#> [,99] [,100] [,101] [,102] [,103] [,104] [,105] [,106] [,107] [,108]
#> [1,] 2 3 1 1 1 1 1 1 2 2
#> [2,] 4 4 2 2 2 3 3 4 3 3
#> [3,] 5 5 3 4 5 4 5 5 4 5
#> [4,] 8 8 8 8 8 8 8 8 8 8
#> [5,] 9 9 10 10 10 10 10 10 10 10
#> [,109] [,110] [,111] [,112] [,113] [,114] [,115] [,116] [,117] [,118]
#> [1,] 2 3 1 1 1 1 1 1 2 2
#> [2,] 4 4 2 2 2 3 3 4 3 3
#> [3,] 5 5 3 4 5 4 5 5 4 5
#> [4,] 8 8 8 8 8 8 8 8 8 8
#> [5,] 10 10 11 11 11 11 11 11 11 11
#> [,119] [,120] [,121] [,122] [,123] [,124] [,125] [,126] [,127] [,128]
#> [1,] 2 3 1 1 1 1 1 1 2 2
#> [2,] 4 4 2 2 2 3 3 4 3 3
#> [3,] 5 5 3 4 5 4 5 5 4 5
#> [4,] 8 8 9 9 9 9 9 9 9 9
#> [5,] 11 11 10 10 10 10 10 10 10 10
#> [,129] [,130] [,131] [,132] [,133] [,134] [,135] [,136] [,137] [,138]
#> [1,] 2 3 1 1 1 1 1 1 2 2
#> [2,] 4 4 2 2 2 3 3 4 3 3
#> [3,] 5 5 3 4 5 4 5 5 4 5
#> [4,] 9 9 9 9 9 9 9 9 9 9
#> [5,] 10 10 11 11 11 11 11 11 11 11
#> [,139] [,140] [,141] [,142] [,143] [,144] [,145] [,146] [,147] [,148]
#> [1,] 2 3 1 1 1 1 1 1 2 2
#> [2,] 4 4 2 2 2 3 3 4 3 3
#> [3,] 5 5 3 4 5 4 5 5 4 5
#> [4,] 9 9 10 10 10 10 10 10 10 10
#> [5,] 11 11 11 11 11 11 11 11 11 11
#> [,149] [,150]
#> [1,] 2 3
#> [2,] 4 4
#> [3,] 5 5
#> [4,] 10 10
#> [5,] 11 11
Created on 2022-12-02 with reprex v2.0.2
|
Find all combinations of n1 elements from vector1 and n2 elements from vector2 in R?
|
I have two vectors and I am trying to find all unique combinations of 3 elements from vector1 and 2 elements from vector2. I have tried the following code.
V1 = combn(1:5, 3) # 10 combinations in total
V2 = combn(6:11, 2) # 15 combinations in total
How to combine V1 and V2 so that there are 10 * 15 = 150 combinations in total? Thanks.
|
[
"The function comboGrid from RcppAlgos (I am the author) does just the trick:\nlibrary(RcppAlgos)\n\ngrid <- comboGrid(c(rep(list(1:5), 3), rep(list(6:11), 2)),\n repetition = FALSE)\n\nhead(grid)\n#> Var1 Var2 Var3 Var4 Var5\n#> [1,] 1 2 3 6 7\n#> [2,] 1 2 3 6 8\n#> [3,] 1 2 3 6 9\n#> [4,] 1 2 3 6 10\n#> [5,] 1 2 3 6 11\n#> [6,] 1 2 3 7 8\n\ntail(grid)\n#> Var1 Var2 Var3 Var4 Var5\n#> [145,] 3 4 5 8 9\n#> [146,] 3 4 5 8 10\n#> [147,] 3 4 5 8 11\n#> [148,] 3 4 5 9 10\n#> [149,] 3 4 5 9 11\n#> [150,] 3 4 5 10 11\n\nIt is quite efficient as well. It is written in C++ and pulls together many ideas from the excellent question: Picking unordered combinations from pools with overlap. The underlying algorithm avoids generating duplicates that would need to be filtered out.\nConsider the following example where generating the Cartesian product contains more than 10 billion results:\nsystem.time(huge <- comboGrid(c(rep(list(1:20), 5), rep(list(21:35), 3)),\n repetition = FALSE))\n#> user system elapsed \n#> 0.990 0.087 1.077\n\ndim(huge)\n#> [1] 7054320 8\n\n",
"You can try expand.grid along with asplit, e.g.,\nexpand.grid(asplit(V1,2), asplit(V2,2))\n\nor\nwith(\n expand.grid(asplit(V1, 2), asplit(V2, 2)),\n t(mapply(c, Var1, Var2))\n)\n\n",
"You can use expand.grid():\ng <- expand.grid(seq_len(ncol(V1)), seq_len(ncol(V2)))\nV3 <- rbind(V1[, g[, 1]], V2[, g[, 2]])\n\nThe result is in a similar format as V1 and V2, i.e. a 5 × 150 matrix (here printed transposed):\nhead(t(V3))\n# [,1] [,2] [,3] [,4] [,5]\n# [1,] 1 2 3 6 7\n# [2,] 1 2 4 6 7\n# [3,] 1 2 5 6 7\n# [4,] 1 3 4 6 7\n# [5,] 1 3 5 6 7\n# [6,] 1 4 5 6 7\n\ndim(unique(t(V3)))\n# [1] 150 5\n\nAnd a generalized approach that can handle more than two initial matrices of combinations, stored in a list V:\nV <- list(V1, V2)\ng <- do.call(expand.grid, lapply(V, \\(x) seq_len(ncol(x))))\nV.comb <- do.call(rbind, mapply('[', V, T, g))\n\nidentical(V.comb, V3)\n[1] TRUE\n\n",
"After some helpful refactoring guidance from @onyambu, here is a shorter solution based on base::merge():\nmerge(t(combn(1:5, 3)),t(combn(6:11, 2)),by.x=NULL,by.y = NULL)\n\n...and the first 20 rows of output:\n> merge(t(combn(1:5, 3)),t(combn(6:11, 2)),by.x=NULL,by.y = NULL)\n V1.x V2.x V3 V1.y V2.y\n1 1 2 3 6 7\n2 1 2 4 6 7\n3 1 2 5 6 7\n4 1 3 4 6 7\n5 1 3 5 6 7\n6 1 4 5 6 7\n7 2 3 4 6 7\n8 2 3 5 6 7\n9 2 4 5 6 7\n10 3 4 5 6 7\n11 1 2 3 6 8\n12 1 2 4 6 8\n13 1 2 5 6 8\n14 1 3 4 6 8\n15 1 3 5 6 8\n16 1 4 5 6 8\n17 2 3 4 6 8\n18 2 3 5 6 8\n19 2 4 5 6 8\n20 3 4 5 6 8\n\noriginal solution\nA base R solution to create a Cartesian product with merge() looks like this:\ndf1 <- data.frame(t(combn(1:5, 3)))\ndf2 <- data.frame(t(combn(6:11, 2)))\ncolnames(df2) <- paste(\"y\",1:2,sep=\"\"))\n\nmerge(df1,df2,by.x=NULL,by.y = NULL)\n\n...and the first 25 rows of output:\n> merge(df1,df2,by.x=NULL,by.y = NULL)\n X1 X2 X3 y1 y2\n1 1 2 3 6 7\n2 1 2 4 6 7\n3 1 2 5 6 7\n4 1 3 4 6 7\n5 1 3 5 6 7\n6 1 4 5 6 7\n7 2 3 4 6 7\n8 2 3 5 6 7\n9 2 4 5 6 7\n10 3 4 5 6 7\n11 1 2 3 6 8\n12 1 2 4 6 8\n13 1 2 5 6 8\n14 1 3 4 6 8\n15 1 3 5 6 8\n16 1 4 5 6 8\n17 2 3 4 6 8\n18 2 3 5 6 8\n19 2 4 5 6 8\n20 3 4 5 6 8\n21 1 2 3 6 9\n22 1 2 4 6 9\n23 1 2 5 6 9\n24 1 3 4 6 9\n25 1 3 5 6 9\n\n",
"Similar idea, using apply\napply(expand.grid(seq(ncol(V1)), seq(ncol(V2))), 1, function(i) {\n c(V1[,i[1]], V2[,i[2]])})\n#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13] [,14]\n#> [1,] 1 1 1 1 1 1 2 2 2 3 1 1 1 1\n#> [2,] 2 2 2 3 3 4 3 3 4 4 2 2 2 3\n#> [3,] 3 4 5 4 5 5 4 5 5 5 3 4 5 4\n#> [4,] 6 6 6 6 6 6 6 6 6 6 6 6 6 6\n#> [5,] 7 7 7 7 7 7 7 7 7 7 8 8 8 8\n#> [,15] [,16] [,17] [,18] [,19] [,20] [,21] [,22] [,23] [,24] [,25] [,26]\n#> [1,] 1 1 2 2 2 3 1 1 1 1 1 1\n#> [2,] 3 4 3 3 4 4 2 2 2 3 3 4\n#> [3,] 5 5 4 5 5 5 3 4 5 4 5 5\n#> [4,] 6 6 6 6 6 6 6 6 6 6 6 6\n#> [5,] 8 8 8 8 8 8 9 9 9 9 9 9\n#> [,27] [,28] [,29] [,30] [,31] [,32] [,33] [,34] [,35] [,36] [,37] [,38]\n#> [1,] 2 2 2 3 1 1 1 1 1 1 2 2\n#> [2,] 3 3 4 4 2 2 2 3 3 4 3 3\n#> [3,] 4 5 5 5 3 4 5 4 5 5 4 5\n#> [4,] 6 6 6 6 6 6 6 6 6 6 6 6\n#> [5,] 9 9 9 9 10 10 10 10 10 10 10 10\n#> [,39] [,40] [,41] [,42] [,43] [,44] [,45] [,46] [,47] [,48] [,49] [,50]\n#> [1,] 2 3 1 1 1 1 1 1 2 2 2 3\n#> [2,] 4 4 2 2 2 3 3 4 3 3 4 4\n#> [3,] 5 5 3 4 5 4 5 5 4 5 5 5\n#> [4,] 6 6 6 6 6 6 6 6 6 6 6 6\n#> [5,] 10 10 11 11 11 11 11 11 11 11 11 11\n#> [,51] [,52] [,53] [,54] [,55] [,56] [,57] [,58] [,59] [,60] [,61] [,62]\n#> [1,] 1 1 1 1 1 1 2 2 2 3 1 1\n#> [2,] 2 2 2 3 3 4 3 3 4 4 2 2\n#> [3,] 3 4 5 4 5 5 4 5 5 5 3 4\n#> [4,] 7 7 7 7 7 7 7 7 7 7 7 7\n#> [5,] 8 8 8 8 8 8 8 8 8 8 9 9\n#> [,63] [,64] [,65] [,66] [,67] [,68] [,69] [,70] [,71] [,72] [,73] [,74]\n#> [1,] 1 1 1 1 2 2 2 3 1 1 1 1\n#> [2,] 2 3 3 4 3 3 4 4 2 2 2 3\n#> [3,] 5 4 5 5 4 5 5 5 3 4 5 4\n#> [4,] 7 7 7 7 7 7 7 7 7 7 7 7\n#> [5,] 9 9 9 9 9 9 9 9 10 10 10 10\n#> [,75] [,76] [,77] [,78] [,79] [,80] [,81] [,82] [,83] [,84] [,85] [,86]\n#> [1,] 1 1 2 2 2 3 1 1 1 1 1 1\n#> [2,] 3 4 3 3 4 4 2 2 2 3 3 4\n#> [3,] 5 5 4 5 5 5 3 4 5 4 5 5\n#> [4,] 7 7 7 7 7 7 7 7 7 7 7 7\n#> [5,] 10 10 10 10 10 10 11 11 11 11 11 11\n#> [,87] [,88] [,89] [,90] [,91] [,92] [,93] [,94] [,95] [,96] [,97] [,98]\n#> [1,] 2 2 2 3 1 1 1 1 1 1 2 2\n#> [2,] 3 3 4 4 2 2 2 3 3 4 3 3\n#> [3,] 4 5 5 5 3 4 5 4 5 5 4 5\n#> [4,] 7 7 7 7 8 8 8 8 8 8 8 8\n#> [5,] 11 11 11 11 9 9 9 9 9 9 9 9\n#> [,99] [,100] [,101] [,102] [,103] [,104] [,105] [,106] [,107] [,108]\n#> [1,] 2 3 1 1 1 1 1 1 2 2\n#> [2,] 4 4 2 2 2 3 3 4 3 3\n#> [3,] 5 5 3 4 5 4 5 5 4 5\n#> [4,] 8 8 8 8 8 8 8 8 8 8\n#> [5,] 9 9 10 10 10 10 10 10 10 10\n#> [,109] [,110] [,111] [,112] [,113] [,114] [,115] [,116] [,117] [,118]\n#> [1,] 2 3 1 1 1 1 1 1 2 2\n#> [2,] 4 4 2 2 2 3 3 4 3 3\n#> [3,] 5 5 3 4 5 4 5 5 4 5\n#> [4,] 8 8 8 8 8 8 8 8 8 8\n#> [5,] 10 10 11 11 11 11 11 11 11 11\n#> [,119] [,120] [,121] [,122] [,123] [,124] [,125] [,126] [,127] [,128]\n#> [1,] 2 3 1 1 1 1 1 1 2 2\n#> [2,] 4 4 2 2 2 3 3 4 3 3\n#> [3,] 5 5 3 4 5 4 5 5 4 5\n#> [4,] 8 8 9 9 9 9 9 9 9 9\n#> [5,] 11 11 10 10 10 10 10 10 10 10\n#> [,129] [,130] [,131] [,132] [,133] [,134] [,135] [,136] [,137] [,138]\n#> [1,] 2 3 1 1 1 1 1 1 2 2\n#> [2,] 4 4 2 2 2 3 3 4 3 3\n#> [3,] 5 5 3 4 5 4 5 5 4 5\n#> [4,] 9 9 9 9 9 9 9 9 9 9\n#> [5,] 10 10 11 11 11 11 11 11 11 11\n#> [,139] [,140] [,141] [,142] [,143] [,144] [,145] [,146] [,147] [,148]\n#> [1,] 2 3 1 1 1 1 1 1 2 2\n#> [2,] 4 4 2 2 2 3 3 4 3 3\n#> [3,] 5 5 3 4 5 4 5 5 4 5\n#> [4,] 9 9 10 10 10 10 10 10 10 10\n#> [5,] 11 11 11 11 11 11 11 11 11 11\n#> [,149] [,150]\n#> [1,] 2 3\n#> [2,] 4 4\n#> [3,] 5 5\n#> [4,] 10 10\n#> [5,] 11 11\n\nCreated on 2022-12-02 with reprex v2.0.2\n"
] |
[
5,
5,
3,
3,
2
] |
[] |
[] |
[
"combinations",
"r"
] |
stackoverflow_0074661523_combinations_r.txt
|
Q:
OOP Error Row's children must not contain any null values, but a null value was found at index 0
I need to move the scorekeeper from main.dart to questionbank.dart; however, this error keeps showing up and I don't know where or how to fix it. The OOP concept must be applied, but I am quite confused about where to start and how to do it. I'm getting overwhelmed.
import 'package:flutter/material.dart';
import 'questionbank.dart';
import 'package:rflutter_alert/rflutter_alert.dart';
QuestionBank qb = new QuestionBank();
void main() => runApp(Quizzler());
class Quizzler extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
backgroundColor: Colors.grey.shade900,
body: SafeArea(
child: Padding(
padding: EdgeInsets.symmetric(horizontal: 10.0),
child: QuizPage(),
),
),
),
);
}
}
class QuizPage extends StatefulWidget {
@override
_QuizPageState createState() => _QuizPageState();
}
class _QuizPageState extends State<QuizPage> {
void checkAnswer (bool A){
setState(() {
if(qb.isFinished()==true){
Alert(
context: context,
title: 'End of Questions',
desc: 'It will start again.',
style: const AlertStyle(
titleStyle: TextStyle(fontWeight: FontWeight.bold),
descStyle: TextStyle(fontSize: 15),
),
buttons: [
DialogButton(
child: Text(
"OKAY",
style: TextStyle(color: Colors.white, fontSize: 20),
),
onPressed: () => Navigator.pop(context),
width: 120,
)
],
).show();
qb.reset();
qb.scoreKeeper = [];
} else{
if(qb.getSagot()==A){
qb.scoreKeeper.add(
qb.skcorrect()
);
}
else{
qb.scoreKeeper.add(
qb.skwrong(),
);
}
qb.next();
}
});
}
@override
Widget build(BuildContext context) {
return Column(
mainAxisAlignment: MainAxisAlignment.spaceBetween,
crossAxisAlignment: CrossAxisAlignment.stretch,
children: <Widget>[
Expanded(
flex: 5,
child: Padding(
padding: EdgeInsets.all(10.0),
child: Center(
child: Text(
qb.getTanong(),
textAlign: TextAlign.center,
style: TextStyle(
fontSize: 25.0,
color: Colors.white,
),
),
),
),
),
Expanded(
child: Padding(
padding: EdgeInsets.all(15.0),
child: Container(
color: Colors.green,
child: TextButton(
child: Text(
'True',
style: TextStyle(
color: Colors.white,
fontSize: 20.0,
),
),
onPressed: () {
checkAnswer(true);
},
),
),
),
),
Expanded(
child: Padding(
padding: EdgeInsets.all(15.0),
child: Container(
color: Colors.red,
child: TextButton(
child: Text(
'False',
style: TextStyle(
fontSize: 20.0,
color: Colors.white,
),
),
onPressed: () {
checkAnswer(false);
},
),
),
),
),
Row(
children:
qb.scoreKeeper,
),
],
);
}
}
import 'package:flutter/material.dart';
import 'question.dart';
class QuestionBank{
int item = 0;
List<Question> questions = [
Question('You can lead a cow down stairs but not up stairs.', false),
Question('Approximately one quarter of human bones are in the feet.', true),
Question('A slug\'s blood is green.', true),
Question('Buzz Aldrin\'s mother\'s maiden name was \"Moon\".', true),
Question('It is illegal to pee in the Ocean in Portugal.', true),
Question('No piece of square dry paper can be folded in half more than 7 times.', false),
Question('In London, UK, if you happen to die in the House of Parliament, you are technically '
'entitled to a state funeral, because the building is considered too sacred a place.', true),
Question('The loudest sound produced by any animal is 188 decibels. '
'That animal is the African Elephant.', false),
Question('The total surface area of two human lungs is approximately 70 square metres.', true),
Question('Google was originally called \"Backrub\".', true),
Question('Chocolate affects a dog\'s heart and nervous system; a few ounces are enough to '
'kill a small dog.', true),
Question('In West Virginia, USA, if you accidentally hit an animal with your car, '
'you are free to take it home to eat.', true),
];
String getTanong(){
return questions[item].tanong;
}
bool getSagot(){
return questions[item].sagot;
}
void next(){
if (item<11){
item++;
}
}
bool isFinished(){
if(item >= 11){
return true;
}
else{
return false;
}
}
void reset(){
if(isFinished()){
item = 0;
}
}
List<Icon> scoreKeeper = [];
skcorrect(){
Icon(
Icons.check,
color: Colors.green ,
);
}
skwrong(){
Icon(
Icons.close,
color: Colors.red,
);
}
}
I have tried using void and creating another scorekeeper in main.dart, but nothing changed.
A:
You are missing a return in the skcorrect and skwrong functions:
Widget skwrong(){
return Icon(
Icons.close,
color: Colors.red,
);
}
Basically, you are adding null here:
qb.scoreKeeper.add(
qb.skcorrect()
);
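For completeness, skcorrect needs the same fix; a sketch with an explicit return type:
Icon skcorrect() {
  // return the Icon instead of creating it and discarding it
  return Icon(
    Icons.check,
    color: Colors.green,
  );
}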
|
OOP Error Row's children must not contain any null values, but a null value was found at index 0
|
I need to move the scorekeeper from main.dart to questionbank.dart; however, this error keeps showing up and I don't know where or how to fix it. The OOP concept must be applied, but I am quite confused about where to start and how to do it. I'm getting overwhelmed.
import 'package:flutter/material.dart';
import 'questionbank.dart';
import 'package:rflutter_alert/rflutter_alert.dart';
QuestionBank qb = new QuestionBank();
void main() => runApp(Quizzler());
class Quizzler extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
backgroundColor: Colors.grey.shade900,
body: SafeArea(
child: Padding(
padding: EdgeInsets.symmetric(horizontal: 10.0),
child: QuizPage(),
),
),
),
);
}
}
class QuizPage extends StatefulWidget {
@override
_QuizPageState createState() => _QuizPageState();
}
class _QuizPageState extends State<QuizPage> {
void checkAnswer (bool A){
setState(() {
if(qb.isFinished()==true){
Alert(
context: context,
title: 'End of Questions',
desc: 'It will start again.',
style: const AlertStyle(
titleStyle: TextStyle(fontWeight: FontWeight.bold),
descStyle: TextStyle(fontSize: 15),
),
buttons: [
DialogButton(
child: Text(
"OKAY",
style: TextStyle(color: Colors.white, fontSize: 20),
),
onPressed: () => Navigator.pop(context),
width: 120,
)
],
).show();
qb.reset();
qb.scoreKeeper = [];
} else{
if(qb.getSagot()==A){
qb.scoreKeeper.add(
qb.skcorrect()
);
}
else{
qb.scoreKeeper.add(
qb.skwrong(),
);
}
qb.next();
}
});
}
@override
Widget build(BuildContext context) {
return Column(
mainAxisAlignment: MainAxisAlignment.spaceBetween,
crossAxisAlignment: CrossAxisAlignment.stretch,
children: <Widget>[
Expanded(
flex: 5,
child: Padding(
padding: EdgeInsets.all(10.0),
child: Center(
child: Text(
qb.getTanong(),
textAlign: TextAlign.center,
style: TextStyle(
fontSize: 25.0,
color: Colors.white,
),
),
),
),
),
Expanded(
child: Padding(
padding: EdgeInsets.all(15.0),
child: Container(
color: Colors.green,
child: TextButton(
child: Text(
'True',
style: TextStyle(
color: Colors.white,
fontSize: 20.0,
),
),
onPressed: () {
checkAnswer(true);
},
),
),
),
),
Expanded(
child: Padding(
padding: EdgeInsets.all(15.0),
child: Container(
color: Colors.red,
child: TextButton(
child: Text(
'False',
style: TextStyle(
fontSize: 20.0,
color: Colors.white,
),
),
onPressed: () {
checkAnswer(false);
},
),
),
),
),
Row(
children:
qb.scoreKeeper,
),
],
);
}
}
import 'package:flutter/material.dart';
import 'question.dart';
class QuestionBank{
int item = 0;
List<Question> questions = [
Question('You can lead a cow down stairs but not up stairs.', false),
Question('Approximately one quarter of human bones are in the feet.', true),
Question('A slug\'s blood is green.', true),
Question('Buzz Aldrin\'s mother\'s maiden name was \"Moon\".', true),
Question('It is illegal to pee in the Ocean in Portugal.', true),
Question('No piece of square dry paper can be folded in half more than 7 times.', false),
Question('In London, UK, if you happen to die in the House of Parliament, you are technically '
'entitled to a state funeral, because the building is considered too sacred a place.', true),
Question('The loudest sound produced by any animal is 188 decibels. '
'That animal is the African Elephant.', false),
Question('The total surface area of two human lungs is approximately 70 square metres.', true),
Question('Google was originally called \"Backrub\".', true),
Question('Chocolate affects a dog\'s heart and nervous system; a few ounces are enough to '
'kill a small dog.', true),
Question('In West Virginia, USA, if you accidentally hit an animal with your car, '
'you are free to take it home to eat.', true),
];
String getTanong(){
return questions[item].tanong;
}
bool getSagot(){
return questions[item].sagot;
}
void next(){
if (item<11){
item++;
}
}
bool isFinished(){
if(item >= 11){
return true;
}
else{
return false;
}
}
void reset(){
if(isFinished()){
item = 0;
}
}
List<Icon> scoreKeeper = [];
skcorrect(){
Icon(
Icons.check,
color: Colors.green ,
);
}
skwrong(){
Icon(
Icons.close,
color: Colors.red,
);
}
}
I have tried using void and creating another scorekeeper in main.dart, but nothing changed.
|
[
"you are missing return in skcorrect and skwrong functions\nWidget skwrong(){\n return Icon(\n Icons.close,\n color: Colors.red,\n );\n}\n\nbasically you are adding null here\nqb.scoreKeeper.add(\n qb.skcorrect()\n );\n\n"
] |
[
0
] |
[] |
[] |
[
"android",
"dart",
"flutter",
"oop"
] |
stackoverflow_0074652345_android_dart_flutter_oop.txt
|
Q:
Create Spring Data Aggregation Query with Projection of Nested Array
Here is what my document looks like:
{
"_id" : ObjectId("583cb6bcce047d1e68339b64"),
"variantDetails" : [
{
"variants" : {
"_" : "_"
},
"sku" : "069563-59690"
},
{
"variants" : {
"size" : "35"
},
"sku" : "069563-59690-35",
"barcode" : "809702246941"
},
{
"variants" : {
"size" : "36"
},
"sku" : "069563-59690-36",
"barcode" : "809702246958"
}
......
] }
And I would like to use a complex aggregation query like this:
db.getCollection('product').aggregate([
{ '$match': { 'variantDetails.sku': { '$in': ['069563-59690', '069563-59690-36', '069563-59690-37', '511534-01001'] } } },
{ '$project': {'_id': 1, 'variantDetails': 1, 'variantLength': { '$size': '$variantDetails' } } },
{ '$unwind': '$variantDetails' },
{ '$match': { 'variantDetails.sku': { '$in': ['069563-59690', '069563-59690-36', '069563-59690-37', '511534-01001'] } } },
{ '$match': { '$or': [
{'variantLength': { '$ne': 1 }, 'variantDetails.variants._': { '$ne': '_' } },
{'variantLength': 1 }
] } },
{ '$group': { '_id': '$_id', 'variantDetails': { '$push': '$variantDetails' } } },
{ '$project': {'_id': 1, 'variantDetails.sku': 1, 'variantDetails.barcode': 1} }
])
And here is my java code:
final Aggregation agg = Aggregation.newAggregation(
Aggregation.match(Criteria.where("variantDetails.sku").in(skus)),
Aggregation.project("_id", "variantDetails").and("variantDetails").project("size").as("variantLength"),
Aggregation.unwind("variantDetails"),
Aggregation.match(Criteria.where("variantDetails.sku").in(skus)),
Aggregation.match(new Criteria().orOperator(Criteria.where("variantLength").is(1), Criteria.where("variantLength").ne(1).and("variantDetails.variants._").is("_"))),
Aggregation.group("_id").push("variantDetails").as("variantDetails"),
Aggregation.project("_id", "variantDetails.sku", "variantDetails.barcode")
);
final AggregationResults<Product> result = this.mongo.aggregate(agg, this.mongo.getCollectionName(Product.class), Product.class);
return result.getMappedResults();
The problem is that Spring translates
Aggregation.project("_id", "variantDetails.sku", "variantDetails.barcode")
To
{ "$project" : { "_id" : 1 , "sku" : "$variantDetails.sku" , "barcode" : "$variantDetails.barcode"}
But I'm expecting
{ '$project': {'_id': 1, 'variantDetails.sku': 1, 'variantDetails.barcode': 1} }
Could someone let me know how to make it right?
A:
I had the same issue and this way works:
Aggregation.project("_id")
.andExpression("variantDetails.sku").as("variantDetails.sku")
.andExpression("variantDetails.barcode").as("variantDetails.barcode"));
The projection will be:
{'$project': {'_id': 1, 'variantDetails.sku': '$variantDetails.sku',
'variantDetails.barcode': '$variantDetails.barcode'} }
A:
You just need to specify the label as an alias in the projection operation, since the default that Spring provides doesn't match. Use Spring Data MongoDB version 1.8.5:
Aggregation.project("_id")
.and(context -> new BasicDBObject("$arrayElemAt", Arrays.asList("variantDetails.sku", 0))).as("variantDetails.sku")
.and(context -> new BasicDBObject("$arrayElemAt", Arrays.asList("variantDetails.barcode", 0))).as("variantDetails.barcode"));
A:
May be an old question, but I faced the same issue pointed out by Sean.
I found that if you want the expected result
{ '$project': {'_id': 1, 'variantDetails.sku': 1, 'variantDetails.barcode': 1} }
a solution can be:
Aggregation.project("_id")
.andExpression("1").as("variantDetails.sku")
.andExpression("1").as("variantDetails.barcode")
Thanks to Virginia León for showing the way. You made my day!
|
Create Spring Data Aggregation Query with Projection of Nested Array
|
Here is what my document looks like:
{
"_id" : ObjectId("583cb6bcce047d1e68339b64"),
"variantDetails" : [
{
"variants" : {
"_" : "_"
},
"sku" : "069563-59690"
},
{
"variants" : {
"size" : "35"
},
"sku" : "069563-59690-35",
"barcode" : "809702246941"
},
{
"variants" : {
"size" : "36"
},
"sku" : "069563-59690-36",
"barcode" : "809702246958"
}
......
] }
And I would like to use a complex aggregation query like this:
db.getCollection('product').aggregate([
{ '$match': { 'variantDetails.sku': { '$in': ['069563-59690', '069563-59690-36', '069563-59690-37', '511534-01001'] } } },
{ '$project': {'_id': 1, 'variantDetails': 1, 'variantLength': { '$size': '$variantDetails' } } },
{ '$unwind': '$variantDetails' },
{ '$match': { 'variantDetails.sku': { '$in': ['069563-59690', '069563-59690-36', '069563-59690-37', '511534-01001'] } } },
{ '$match': { '$or': [
{'variantLength': { '$ne': 1 }, 'variantDetails.variants._': { '$ne': '_' } },
{'variantLength': 1 }
] } },
{ '$group': { '_id': '$_id', 'variantDetails': { '$push': '$variantDetails' } } },
{ '$project': {'_id': 1, 'variantDetails.sku': 1, 'variantDetails.barcode': 1} }
])
And here is my java code:
final Aggregation agg = Aggregation.newAggregation(
Aggregation.match(Criteria.where("variantDetails.sku").in(skus)),
Aggregation.project("_id", "variantDetails").and("variantDetails").project("size").as("variantLength"),
Aggregation.unwind("variantDetails"),
Aggregation.match(Criteria.where("variantDetails.sku").in(skus)),
Aggregation.match(new Criteria().orOperator(Criteria.where("variantLength").is(1), Criteria.where("variantLength").ne(1).and("variantDetails.variants._").is("_"))),
Aggregation.group("_id").push("variantDetails").as("variantDetails"),
Aggregation.project("_id", "variantDetails.sku", "variantDetails.barcode")
);
final AggregationResults<Product> result = this.mongo.aggregate(agg, this.mongo.getCollectionName(Product.class), Product.class);
return result.getMappedResults();
The problem is that Spring translates
Aggregation.project("_id", "variantDetails.sku", "variantDetails.barcode")
To
{ "$project" : { "_id" : 1 , "sku" : "$variantDetails.sku" , "barcode" : "$variantDetails.barcode"}
But I'm expecting
{ '$project': {'_id': 1, 'variantDetails.sku': 1, 'variantDetails.barcode': 1} }
Could someone let me know how to make it right?
|
[
"I had the same issue and this way works:\nAggregation.project(\"_id\")\n.andExpression(\"variantDetails.sku\").as(\"variantDetails.sku\") \n.andExpression(\"variantDetails.barcode\").as(\"variantDetails.barcode\"));\n\nThe projection will be:\n{'$project': {'_id': 1, 'variantDetails.sku': '$variantDetails.sku', \n'variantDetails.barcode': '$variantDetails.barcode'} }\n\n",
"You just need to specify the label as alias in the projection operation as the default that spring provides doesnt match. Use Spring 1.8.5 version\nAggregation.project(\"_id\")\n .and(context -> new BasicDBObject(\"$arrayElemAt\", Arrays.asList(\"variantDetails.sku\", 0))).as(\"variantDetails.sku\")\n .and(context -> new BasicDBObject(\"$arrayElemAt\", Arrays.asList(\"variantDetails.barcode\", 0))).as(\"variantDetails.barcode\"));\n\n",
"May be an old question, but I faced the same issue pointed by Sean.\nIf found that if you want the expected result\n{ '$project': {'_id': 1, 'variantDetails.sku': 1, 'variantDetails.barcode': 1} }\n\na solution can be:\nAggregation.project(\"_id\")\n .andExpression(\"1\").as(\"variantDetails.sku\")\n .andExpression(\"1\").as(\"variantDetails.barcode\")\n\nThanks to Virginia León for showing the way. You make my day!\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"mongodb",
"spring",
"spring_data_mongodb"
] |
stackoverflow_0040915681_mongodb_spring_spring_data_mongodb.txt
|
Q:
Finding all occurrences of a given word and extracting the next word in bash
I have a .txt file where the word 'picture:' is found multiple times. How can I extract each word that comes after 'picture:' and save the results in a text file?
I tried the following code, but it doesn't work:
cat users_sl.txt |awk -F: '/^login:"/{print $2}' cookies.txt
user_sl.txt:
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Quis picture lobortis scelerisque fermentum dui faucibus in ornare quam. Est ullamcorper eget nulla facilisi etiam dignissim diam quis. Quis viverra nibh cras pulvinar mattis nunc sed. Turpis massa sed elementum picture tempus egestas. Condimentum vitae sapien pellentesque habitant. Et molestie ac feugiat sed lectus vestibulum mattis ullamcorper. Tincidunt lobortis feugiat vivamus at augue eget arcu picture dictum varius. Donec massa sapien faucibus et molestie ac feugiat sed. Tincidunt eget nullam non nisi est. Ornare arcu dui vivamus arcu. Mattis enim ut tellus elementum sagittis vitae et leo duis
picturelist.txt:
lobortis
dictum
tempus
A:
Well, I'm assuming you actually just have picture instead of **picture:**, and that you may need to deal with line breaks, so...
$ cat sl.txt
Lorem ipsum dolor sit amet, consectetur adipiscing elit,
sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Quis picture lobortis scelerisque fermentum dui faucibus in ornare quam.
Est ullamcorper eget nulla facilisi etiam dignissim diam quis.
Quis viverra nibh cras pulvinar mattis nunc sed.
Turpis massa sed elementum picture tempus egestas.
Condimentum vitae sapien pellentesque habitant.
Et molestie ac feugiat sed lectus vestibulum mattis ullamcorper.
Tincidunt lobortis feugiat vivamus at augue eget arcu picture
dictum varius. Donec massa sapien faucibus et molestie ac feugiat sed.
Tincidunt eget nullam non nisi est.
Ornare arcu dui vivamus arcu.
Mattis enim ut tellus elementum sagittis vitae et leo duis
$ cat sl.txt | tr '\n' ' ' | grep -o 'picture [^ ]*' | cut -d' ' -f2
lobortis
tempus
dictum
Edit: Explanation:
tr '\n' ' ' replaces every (unix) line break with a space -- makes the whole thing one line.
The -o flag tells grep to return only the matched string. The search pattern starts with picture followed by a space (picture ), and then matches everything that follows that is not a space: [^ ]*.
Finally cut using the space character for a delimiter -d ' ' prints the second field: -f 2
A:
Here is a bash solution with a clean shellcheck. Tested with bash 5.2.2 on a macOS Ventura system.
#!/usr/bin/env bash
IFS=" " read -r -a WORDS <<< "$(tr '\n' ' ' < users_sl.txt)"
echo processing ${#WORDS[@]} words
for (( i=0; i < ${#WORDS[@]}; i++ ))
do
if [ "${WORDS[$i]}" = "picture" ]; then
echo "${WORDS[i+1]}"
fi
done | tee picturelist.txt
A:
The code you provided seems to be incorrect. The awk command should be used to search for the word 'picture:' in the file and print the word that comes after it. Here is an example of how you could do that:
awk '/picture:/{getline; print}' users_sl.txt > output.txt
This command will search for the pattern 'picture:' in the file users_sl.txt, then get the next line and print it to a file called output.txt.
Here is a breakdown of the command:
awk: This is the command to run the awk program.
/picture:/: This is the pattern that awk will search for in the input file. In this case, we are searching for the word 'picture:'.
getline: This is an awk function that gets the next line from the input file.
print: This is an awk function that prints the current line to the output file.
A:
With perl:
$ perl -nE 'say for /\bpicture\b\s+(\w+)\b/g' user_sl.txt | tee picturelist.txt
lobortis
tempus
dictum
A:
With awk:
$ awk '{
for (i=1; i<=NF; i++) {
if ($i == "picture") print $(i+1)
}
}' user_sl.txt | tee picturelist.txt
or
$ printf '%s\n' $(< users_sl.txt) |
awk '/picture/{p=1;next} {if (p==1) {print;p=0}}' > picturelist.txt
lobortis
tempus
dictum
A:
With bash:
#!/bin/bash
arr=( $(<user_sl.txt) )
for ((i=0; i<${#arr[@]}; i++)); do
if [[ ${arr[i]} == picture ]]; then
printf '%s\n' "${arr[i+1]}"
fi
done | tee picturelist.txt
Output
lobortis
tempus
dictum
|
Finding all occurrences of a given word and extracting the next word in bash
|
I have a .txt file where the word 'picture:' is found multiple times. How can I extract each word that comes after 'picture:' and save the results in a text file?
I tried the following code, but it doesn't work:
cat users_sl.txt |awk -F: '/^login:"/{print $2}' cookies.txt
user_sl.txt:
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Quis picture lobortis scelerisque fermentum dui faucibus in ornare quam. Est ullamcorper eget nulla facilisi etiam dignissim diam quis. Quis viverra nibh cras pulvinar mattis nunc sed. Turpis massa sed elementum picture tempus egestas. Condimentum vitae sapien pellentesque habitant. Et molestie ac feugiat sed lectus vestibulum mattis ullamcorper. Tincidunt lobortis feugiat vivamus at augue eget arcu picture dictum varius. Donec massa sapien faucibus et molestie ac feugiat sed. Tincidunt eget nullam non nisi est. Ornare arcu dui vivamus arcu. Mattis enim ut tellus elementum sagittis vitae et leo duis
picturelist.txt:
lobortis
dictum
tempus
|
[
"Well, I'm assuming you actually just have picture instead of **picture:**, and that you may need to deal with line breaks, so...\n$ cat sl.txt \nLorem ipsum dolor sit amet, consectetur adipiscing elit,\nsed do eiusmod tempor incididunt ut labore et dolore magna aliqua.\nQuis picture lobortis scelerisque fermentum dui faucibus in ornare quam.\nEst ullamcorper eget nulla facilisi etiam dignissim diam quis.\nQuis viverra nibh cras pulvinar mattis nunc sed.\nTurpis massa sed elementum picture tempus egestas.\nCondimentum vitae sapien pellentesque habitant.\nEt molestie ac feugiat sed lectus vestibulum mattis ullamcorper.\nTincidunt lobortis feugiat vivamus at augue eget arcu picture\ndictum varius. Donec massa sapien faucibus et molestie ac feugiat sed.\nTincidunt eget nullam non nisi est.\nOrnare arcu dui vivamus arcu.\nMattis enim ut tellus elementum sagittis vitae et leo duis\n\n$ cat sl.txt | tr '\\n' ' ' | grep -o 'picture [^ ]*' | cut -d' ' -f2\nlobortis\ntempus\ndictum\n\nEdit: Explanation:\ntr '\\n' ' ' replaces every (unix) line break with a space -- makes the whole thing one line.\nThe -o flag tells grep to return only the matched string. The search pattern starts with picture and a space picture , and then everything that follows that is not a space: [^ ]*.\nFinally cut using the space character for a delimiter -d ' ' prints the second field: -f 2\n",
"Here is a bash solution with a clean shellcheck. Tested with bash version 5.2.2 on a MacOS Ventura system.\n#!/usr/bin/env bash\n\nIFS=\" \" read -r -a WORDS <<< \"$(tr '\\n' ' ' < users_sl.txt)\"\n\necho processing ${#WORDS[@]} words\n\nfor (( i=0; i < ${#WORDS[@]}; i++ ))\ndo\n if [ \"${WORDS[$i]}\" = \"picture\" ]; then\n echo \"${WORDS[i+1]}\"\n fi\ndone | tee picturelist.txt\n\n",
"The code you provided seems to be incorrect. The awk command should be used to search for the word 'picture:' in the file and print the word that comes after it. Here is an example of how you could do that:\nawk '/picture:/{getline; print}' users_sl.txt > output.txt\n\nThis command will search for the pattern 'picture:' in the file users_sl.txt, then get the next line and print it to a file called output.txt.\nHere is a breakdown of the command:\n\nawk: This is the command to run the awk program.\n/picture:/: This is the pattern that awk will search for in the input file. In this case, we are searching for the word 'picture:'.\ngetline: This is an awk\nfunction that gets the next line from the input file.\nprint: This is\nan awk function that prints the current line to the output file.\n\n",
"With perl:\n$ perl -nE 'say for /\\bpicture\\b\\s+(\\w+)\\b/g' user_sl.txt | tee picturelist.txt\nlobortis\ntempus\ndictum\n\n",
"With awk:\n$ awk '{\n for (i=1; i<=NF; i++) {\n if ($i == \"picture\") print $(i+1)\n }\n}' user_sl.txt | tee picturelist.txt\n\nor\n$ printf '%s\\n' $(< users_sl.txt) |\n awk '/picture/{p=1;next} {if (p==1) {print;p=0}}' > picturelist.txt\n\nlobortis\ntempus\ndictum\n\n",
"With bash:\n#!/bin/bash\n\narr=( $(<user_sl.txt) )\nfor ((i=0; i<${#arr[@]}; i++)); do\n if [[ ${arr[i]} == picture ]]; then\n printf '%s\\n' \"${arr[i+1]}\"\n fi\ndone | tee picturelist.txt\n\nOutput\nlobortis\ntempus\ndictum\n\n"
] |
[
1,
1,
0,
0,
0,
0
] |
[
"You put the data in textfile.\nRun in bash.\ncat textfile | \\\n grep -o 'picture:\\*\\*[^ ]*' | \\\n sed 's/.*\\*\\(.*\\)/\\1/g';\n\n",
"The code you provided is not working because there are some syntax errors in it. The correct syntax to extract all words after the 'pictures:' word and save it in a text file is:\ncat users_sl.txt | awk -F \"picture:\" '{print $2}' > picturelist.txt\n\nThe above command will extract all words after 'picture:' in the input file and save it in a file named 'picturelist.txt'.\nHere's how the command works:\n\ncat users_sl.txt - This prints the content of the file 'users_sl.txt' to the standard output.\n\nawk -F \"picture:\" '{print $2}' - This is the core part of the command. It uses the awk command to extract the words after 'picture:'. The -F option is used to specify the delimiter (in this case, it is 'picture:') and $2 is used to extract the second field (which is the words after 'picture:').\n\n> picturelist.txt - This redirects the output of the awk command to a file named 'picturelist.txt'.\n\n\nI hope this helps. Let me know if you have any questions.\n"
] |
[
-1,
-1
] |
[
"bash",
"shell",
"unix"
] |
stackoverflow_0074646404_bash_shell_unix.txt
|
Q:
Insert rows in Python dataframe with conditions
I have a large data file as shown below.
I wanted to add two new columns (E and F) next to column D and move the suite # (when applicable) and the City/State data from cells D3 and D4 to E2 and F2, respectively. The challenge is that not every entry has a suite number. I would need to insert a row first, but only for the entries that don't have a suite number, not for those that already have the suite information.
I know how to do loops, but I am having trouble defining the conditions. One way is to count the length of the string. How should I get started? I'd much appreciate your help!
A:
This is how I would do it. I don't recommend looping when using pandas; it has a lot of tools that make loops unnecessary. Some caution on this: your spreadsheet has NaN, which I think is the numpy np.nan equivalent, and you also have blanks, which I am treating as the empty string "".
import numpy as np
import pandas as pd

# dictionary of your data
companies = {
'Comp ID': ['C1', '', np.nan, 'C2', '', np.nan, 'C3',np.nan],
'Address': ['10 foo', 'Suite A','foo city', '11 spam','STE 100','spam town', '12 ham', 'Myhammy'],
'phone': ['888-321-4567', '', np.nan, '888-321-4567', '', np.nan, '888-321-4567',np.nan],
'Type': ['W_sale', '', np.nan, 'W_sale', '', np.nan, 'W_sale',np.nan],
}
# make the frames needed.
df = pd.DataFrame( companies)
df1 = pd.DataFrame() # blank frame for suite and town columns
# Need a where clause it is similar to a if() statement in excel
df1['Suite'] = np.where( df['Comp ID']=='', df['Address'], np.nan)
df1['City/State'] = np.where( df['Comp ID'].isna(), df['Address'], np.nan)
# copy values to rows above
df1 = df1[['Suite','City/State']].backfill()
# joint the frames together on index
df = df.join(df1)
df.drop_duplicates(subset=['City/State'], keep='first', inplace=True)
# set the column order to what you want
df = df[['Comp ID', 'Type', 'Address', 'Suite', 'City/State', 'phone' ]]
output
| Comp ID | Type   | Address | Suite   | City/State | phone        |
|---------|--------|---------|---------|------------|--------------|
| C1      | W_sale | 10 foo  | Suite A | foo city   | 888-321-4567 |
| C2      | W_sale | 11 spam | STE 100 | spam town  | 888-321-4567 |
| C3      | W_sale | 12 ham  |         | Myhammy    | 888-321-4567 |
|
Insert rows in Python dataframe with conditions
|
I have a large data file as shown below.
I wanted to add two new columns (E and F) next to column D and move the suite # (when applicable) and the City/State data from cells D3 and D4 to E2 and F2, respectively. The challenge is that not every entry has a suite number. I would need to insert a row first, but only for the entries that don't have a suite number, not for those that already have the suite information.
I know how to do loops, but I am having trouble defining the conditions. One way is to count the length of the string. How should I get started? I'd much appreciate your help!
|
[
"This is how I would do it. I don't recommend looping when using pandas. There are a lot of tools that it is often not needed. Some caution on this. Your spreadsheet has NaN and I think that is actually numpy np.nan equivalent. You also have blanks I am thinking that it is a \"\" equivalent.\n# dictionary of your data\ncompanies = {\n 'Comp ID': ['C1', '', np.nan, 'C2', '', np.nan, 'C3',np.nan],\n 'Address': ['10 foo', 'Suite A','foo city', '11 spam','STE 100','spam town', '12 ham', 'Myhammy'],\n 'phone': ['888-321-4567', '', np.nan, '888-321-4567', '', np.nan, '888-321-4567',np.nan],\n 'Type': ['W_sale', '', np.nan, 'W_sale', '', np.nan, 'W_sale',np.nan],\n}\n# make the frames needed. \ndf = pd.DataFrame( companies)\ndf1 = pd.DataFrame() # blank frame for suite and town columns\n\n# Need a where clause it is similar to a if() statement in excel\ndf1['Suite'] = np.where( df['Comp ID']=='', df['Address'], np.nan)\ndf1['City/State'] = np.where( df['Comp ID'].isna(), df['Address'], np.nan)\n# copy values to rows above\ndf1 = df1[['Suite','City/State']].backfill()\n# joint the frames together on index\ndf = df.join(df1)\ndf.drop_duplicates(subset=['City/State'], keep='first', inplace=True)\n# set the column order to what you want\ndf = df[['Comp ID', 'Type', 'Address', 'Suite', 'City/State', 'phone' ]]\n\noutput\n\n\n\n\nComp ID\nType\nAddress\nSuite\nCity/State\nphone\n\n\n\n\nC1\nW_sale\n10 foo\nSuite A\nfoo city\n888-321-4567\n\n\nC2\nW_sale\n11 spam\nSTE 100\nspam town\n888-321-4567\n\n\nC3\nW_sale\n12 ham\n\nMyhammy\n888-321-4567\n\n\n\n"
] |
[
0
] |
[] |
[] |
[
"conditional_statements",
"dataframe",
"insert",
"pandas",
"python"
] |
stackoverflow_0074661308_conditional_statements_dataframe_insert_pandas_python.txt
|
Q:
How do I pass back a user selection of YES/NO
Right now the current script is the one below, where someone selects a menu option and the name is deleted.
I'd like to provide a prompt to confirm this selection, where Yes would delete the name (and they would get their default name back) and No would basically mean "cancel".
function deleteSomething()
{
deletename('name')
addDefaultname()
}
function deletename(value) {
PropertiesService.getUserProperties().deleteProperty(value)
}
Here's what I've come up with:
function deleteSomething()
{ deletename('name')
if('YES')
{ addDefaultName()
}
else {
}
}
function deleteSomething(value){
var result=SpreadsheetApp.getUi().alert("Are you sure you want to delete this name?", SpreadsheetApp.getUi().ButtonSet.YES_NO)
if(result === SpreadsheetApp.getUi().Button.YES) {
}
else
{
}
}
I'm not that skilled in programming, but I'm trying to figure out how to pass back the user selection.
A:
function askQuestion() {
let r = SpreadsheetApp.getUi().prompt("Do you wish to continue?",SpreadsheetApp.getUi().ButtonSet.YES_NO);
  if(r.getSelectedButton() == SpreadsheetApp.getUi().Button.YES) {
//if yes
} else {
// must be no this way
}
}
How do I pass r back to deleteSomething()?
function askQuestion() {
let r = SpreadsheetApp.getUi().prompt("Do you wish to continue?",SpreadsheetApp.getUi().ButtonSet.YES_NO);
return r;
}
function doSomething(r=askQuestion()) {
//doSomething code
}
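Wiring this into the deletion flow could look like the sketch below; deletename and addDefaultname are assumed to be the helpers from your script, and note that alert returns the clicked Button directly (unlike prompt, which returns a response object):
function deleteSomething() {
  const ui = SpreadsheetApp.getUi();
  const result = ui.alert("Are you sure you want to delete this name?", ui.ButtonSet.YES_NO);
  if (result === ui.Button.YES) {
    deletename('name');   // assumed helper from the question
    addDefaultname();     // assumed helper from the question
  }
  // NO (or closing the dialog) simply cancels
}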
|
How do I pass back a user selection of YES/NO
|
Right now the current script is the one below, where someone selects a menu option and the name is deleted.
I'd like to provide a prompt to confirm this selection, where Yes would delete the name (and they would get their default name back) and No would basically mean "cancel".
function deleteSomething()
{
deletename('name')
addDefaultname()
}
function deletename(value) {
PropertiesService.getUserProperties().deleteProperty(value)
}
Here's what I've come up with:
function deleteSomething()
{ deletename('name')
if('YES')
{ addDefaultName()
}
else {
}
}
function deleteSomething(value){
var result=SpreadsheetApp.getUi().alert("Are you sure you want to delete this name?", SpreadsheetApp.getUi().ButtonSet.YES_NO)
if(result === SpreadsheetApp.getUi().Button.YES) {
}
else
{
}
}
I'm not that skilled in programming, but I'm trying to figure out how to pass back the user selection.
|
[
"function askQuestion() {\n let r = SpreadsheetApp.getUi().prompt(\"Do you wish to continue?\",SpreadsheetApp.getUi().ButtonSet.YES_NO);\n if(r.getSelectedButton == SpreadsheetApp.getUi().Button.YES) {\n //if yes\n } else {\n // must be no this way\n }\n}\n\nHow do I pass r back to deleteSomething()?\nfunction askQuestion() {\n let r = SpreadsheetApp.getUi().prompt(\"Do you wish to continue?\",SpreadsheetApp.getUi().ButtonSet.YES_NO);\n return r;\n}\n\nfunction doSomething(r=askQuestion()) {\n //doSomething code\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"google_apps_script"
] |
stackoverflow_0074661948_google_apps_script.txt
|
Q:
Cannot Resume Discord Websocket After being disconnected (InvalidSession)
I'm working with the Discord WebSocket, and I'm trying to have my bot reconnect after it gets opcode 7 (Reconnect) or when I disconnect (I use close code 4200).
When I try, though, I always get an InvalidSession payload (opcode 9).
I've tried a few things. First, I tried it the normal way in TS:
{
op: 6,
d: {
token: "My Bot Token",
session_id: "My Session ID",
seq: 5,
}
}
This is an example of the JSON I send after reconnecting with the reconnect URL. Here is the reconnect code as well:
this.ws = new WebSocket(this.resumeUrl);
this.ws.on('message', (data: Buffer) => {
const payload = JSON.parse(data.toString());
if (payload.op == OpCodes.Hello) {
console.log('Sending resume payload');
this.ws.send(this.payloads.resume());
console.log('Sent resume payload');
}
})
Then I also tried doing it using Postman. Both attempts always get the Invalid Session.
Here is a complete example of my code
import { WebSocket } from "ws";
const OpCodes = {
Dispatch: 0,
Heartbeat: 1,
Identify: 2,
PresenceUpdate: 3,
VoiceStateUpdate: 4,
Resume: 6,
Reconnect: 7,
RequestGuildMembers: 8,
InvalidSession: 9,
Hello: 10,
HeartbeatAck: 11,
}
const token = ""
let resume_gateway_url = "";
let session_id = "";
let seq = 0;
let heartbeatInterval = null;
const ws = new WebSocket("wss://gateway.discord.gg/?v=10&encoding=json");
console.log('='.repeat(50));
ws.on('message', (data) => {
const payload = JSON.parse(data.toString());
if (payload?.s) seq = payload.s;
if (payload?.op == OpCodes.Hello) {
heartbeatInterval = setInterval(() => {
ws.send(JSON.stringify({
op: OpCodes.Heartbeat,
d: seq
}))
}, payload.d.heartbeat_interval);
}
if (payload.t == "READY") {
resume_gateway_url = payload.d.resume_gateway_url;
session_id = payload.d.session_id;
}
console.log(`Got payload ${Object.keys(OpCodes).find(key => OpCodes[key] === payload.op)} (${payload?.t ?? payload.op})`);
console.log('='.repeat(50));
});
ws.on("open", () => {
ws.send(JSON.stringify({
op: OpCodes.Identify,
d: {
token,
properties: {
os: process.platform,
browser: 'DinkJS',
device: 'DinkJS',
},
intents: 3276799,
}
}))
});
setTimeout(() => {
clearInterval(heartbeatInterval);
console.time('Reconnecting Took');
ws.close(4200, 'Reconnect');
console.log('Closing old WS with code 4200');
console.log('='.repeat(50));
const newWs = new WebSocket(resume_gateway_url);
console.log(`Opening new WS with the resume gateway url of ${resume_gateway_url} (Seq: ${seq})`);
console.log('='.repeat(50));
newWs.on('message', (data) => {
const payload = JSON.parse(data.toString());
if (payload.op == OpCodes.Hello) {
newWs.send(JSON.stringify({
op: OpCodes.Resume,
d: {
token,
session_id,
seq
}
}))
console.timeEnd('Reconnecting Took');
}
console.log(`Got payload ${Object.keys(OpCodes).find(key => OpCodes[key] === payload.op)} (${payload?.t ?? payload.op})`);
console.log('='.repeat(50));
});
}, 1000 * 60 * 5);
A:
If you're getting an InvalidSession payload after trying to reconnect, it means that the session you're trying to resume is no longer valid. This could happen for a number of reasons, such as if the session has timed out or if the connection was closed by Discord.
In order to properly reconnect to Discord, you'll need to get a new session ID and sequence number. The best way to do this is to initiate a new connection to Discord and identify yourself as a new client using the IDENTIFY opcode. Once you've done this, Discord will send you a READY payload with the new session ID and sequence number, which you can use to resume the connection.
Here's an example of how you might implement this in your code:
// Start a new connection to Discord
const ws = new WebSocket("wss://gateway.discord.gg/?v=10&encoding=json");
// Set up a listener for the 'message' event
ws.on('message', (data) => {
const payload = JSON.parse(data.toString());
// If we receive a READY payload, save the session ID and sequence number
if (payload.t == "READY") {
session_id = payload.d.session_id;
    seq = payload.s; // the sequence number is the top-level "s" field, not part of "d"
}
});
// When the connection is opened, send an IDENTIFY payload
ws.on("open", () => {
ws.send(JSON.stringify({
op: OpCodes.Identify,
d: {
token,
properties: {
os: process.platform,
browser: 'DinkJS',
device: 'DinkJS',
},
intents: 3276799,
}
}))
});
// When you want to reconnect, close the old connection and start a new one
setTimeout(() => {
// Close the old connection with code 4200 (Reconnect)
ws.close(4200, 'Reconnect');
// Start a new connection with the same gateway URL
const newWs = new WebSocket("wss://gateway.discord.gg/?v=10&encoding=json");
// When the new connection is opened, send a RESUME payload
newWs.on("open", () => {
newWs.send(JSON.stringify({
op: OpCodes.Resume,
d: {
token,
session_id,
seq,
}
}))
});
});
You'll need to make sure that you're properly handling the HELLO payload, which contains the heartbeat interval, and sending regular HEARTBEAT messages to keep the connection alive.
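One more hedged note: Discord's gateway documentation describes resuming on the resume_gateway_url received in READY rather than the main gateway URL, and waiting for the new socket's HELLO before sending RESUME (as the question's code does) is the safe ordering. A minimal sketch, reusing token, session_id, seq, and resume_gateway_url from the code above:
// open the resume URL saved from READY
const resumeWs = new WebSocket(resume_gateway_url);

resumeWs.on('message', (data) => {
  const payload = JSON.parse(data.toString());
  if (payload.op === 10) { // HELLO: restart heartbeating, then resume
    resumeWs.send(JSON.stringify({
      op: 6, // RESUME
      d: { token, session_id, seq },
    }));
  }
});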
|
Cannot Resume Discord Websocket After being disconnected (InvalidSession)
|
I'm working with the Discord WebSocket, and I'm trying to have my bot reconnect after it gets opcode 7 (Reconnect) or when I disconnect (I use close code 4200).
When I try, though, I always get an InvalidSession payload (opcode 9).
I've tried a few things. First, I tried it the normal way in TS:
{
op: 6,
d: {
token: "My Bot Token",
session_id: "My Session ID",
seq: 5,
}
}
This is an example of the JSON I send after reconnecting with the reconnect URL. Here is the reconnect code as well:
this.ws = new WebSocket(this.resumeUrl);
this.ws.on('message', (data: Buffer) => {
const payload = JSON.parse(data.toString());
if (payload.op == OpCodes.Hello) {
console.log('Sending resume payload');
this.ws.send(this.payloads.resume());
console.log('Sent resume payload');
}
})
Then I also tried doing it using Postman. Both attempts always get the Invalid Session.
Here is a complete example of my code
import { WebSocket } from "ws";
const OpCodes = {
Dispatch: 0,
Heartbeat: 1,
Identify: 2,
PresenceUpdate: 3,
VoiceStateUpdate: 4,
Resume: 6,
Reconnect: 7,
RequestGuildMembers: 8,
InvalidSession: 9,
Hello: 10,
HeartbeatAck: 11,
}
const token = ""
let resume_gateway_url = "";
let session_id = "";
let seq = 0;
let heartbeatInterval = null;
const ws = new WebSocket("wss://gateway.discord.gg/?v=10&encoding=json");
console.log('='.repeat(50));
ws.on('message', (data) => {
const payload = JSON.parse(data.toString());
if (payload?.s) seq = payload.s;
if (payload?.op == OpCodes.Hello) {
heartbeatInterval = setInterval(() => {
ws.send(JSON.stringify({
op: OpCodes.Heartbeat,
d: seq
}))
}, payload.d.heartbeat_interval);
}
if (payload.t == "READY") {
resume_gateway_url = payload.d.resume_gateway_url;
session_id = payload.d.session_id;
}
console.log(`Got payload ${Object.keys(OpCodes).find(key => OpCodes[key] === payload.op)} (${payload?.t ?? payload.op})`);
console.log('='.repeat(50));
});
ws.on("open", () => {
ws.send(JSON.stringify({
op: OpCodes.Identify,
d: {
token,
properties: {
os: process.platform,
browser: 'DinkJS',
device: 'DinkJS',
},
intents: 3276799,
}
}))
});
setTimeout(() => {
clearInterval(heartbeatInterval);
console.time('Reconnecting Took');
ws.close(4200, 'Reconnect');
console.log('Closing old WS with code 4200');
console.log('='.repeat(50));
const newWs = new WebSocket(resume_gateway_url);
console.log(`Opening new WS with the resume gateway url of ${resume_gateway_url} (Seq: ${seq})`);
console.log('='.repeat(50));
newWs.on('message', (data) => {
const payload = JSON.parse(data.toString());
if (payload.op == OpCodes.Hello) {
newWs.send(JSON.stringify({
op: OpCodes.Resume,
d: {
token,
session_id,
seq
}
}))
console.timeEnd('Reconnecting Took');
}
console.log(`Got payload ${Object.keys(OpCodes).find(key => OpCodes[key] === payload.op)} (${payload?.t ?? payload.op})`);
console.log('='.repeat(50));
});
}, 1000 * 60 * 5);
|
[
"If you're getting an InvalidSession payload after trying to reconnect, it means that the session you're trying to resume is no longer valid. This could happen for a number of reasons, such as if the session has timed out or if the connection was closed by Discord.\nIn order to properly reconnect to Discord, you'll need to get a new session ID and sequence number. The best way to do this is to initiate a new connection to Discord and identify yourself as a new client using the IDENTIFY opcode. Once you've done this, Discord will send you a READY payload with the new session ID and sequence number, which you can use to resume the connection.\nHere's an example of how you might implement this in your code:\n// Start a new connection to Discord\nconst ws = new WebSocket(\"wss://gateway.discord.gg/?v=10&encoding=json\");\n\n// Set up a listener for the 'message' event\nws.on('message', (data) => {\n const payload = JSON.parse(data.toString());\n\n // If we receive a READY payload, save the session ID and sequence number\n if (payload.t == \"READY\") {\n session_id = payload.d.session_id;\n seq = payload.d.seq;\n }\n});\n\n// When the connection is opened, send an IDENTIFY payload\nws.on(\"open\", () => {\n ws.send(JSON.stringify({\n op: OpCodes.Identify,\n d: {\n token,\n properties: {\n os: process.platform,\n browser: 'DinkJS',\n device: 'DinkJS',\n },\n intents: 3276799,\n }\n }))\n});\n\n// When you want to reconnect, close the old connection and start a new one\nsetTimeout(() => {\n // Close the old connection with code 4200 (Reconnect)\n ws.close(4200, 'Reconnect');\n\n // Start a new connection with the same gateway URL\n const newWs = new WebSocket(\"wss://gateway.discord.gg/?v=10&encoding=json\");\n\n // When the new connection is opened, send a RESUME payload\n newWs.on(\"open\", () => {\n newWs.send(JSON.stringify({\n op: OpCodes.Resume,\n d: {\n token,\n session_id,\n seq,\n }\n }))\n });\n});\n\nYou'll need to make sure that you're properly handling the HELLO payload, which contains the heartbeat interval, and sending regular HEARTBEAT messages to keep the connection alive.\n"
] |
[
1
] |
[] |
[] |
[
"discord",
"discord.js",
"typescript"
] |
stackoverflow_0074642188_discord_discord.js_typescript.txt
|
Q:
Toggle Boolean value based on a triple state filter
I'm having a brain melting time with this. For some reason I thought it would be easier, but I'm struggling with this.
I have an application that a user can configure before running, based on the parameters the user wants to test. There are 3 filters that the user can either turn on, turn off, or toggle.
If the user wants a filter on, he will set the filter in the configuration file to True. If he wants it off, he sets it to False. If, however, the user wishes to run the test with the filter on and then again off, he can set the configuration file to toggle.
Here are examples of filter1, filter2, and filter3 stored in a list.
toggle_state = ["toggle", "toggle", False]
toggle_state = ["toggle", True, "toggle"]
toggle_state = [False, "toggle", True]
toggle_state = [True, False, False]
...
Any combination should be available for testing purposes.
I have implemented nested while loops to accomplish what I'm attempting to do. However, I have had no real success. I have been able to make it work, with just toggle for all three filters.
I stripped out the functions related to my application in a simple MUC script below.
#####CODE BLOCK 1######
import time
def toggle_filters():
toggle_state = ["toggle", "toggle", "toggle"]
# toggle_state = ["toggle", "toggle", False]
# toggle_state = ["toggle", "toggle", True]
# toggle_state = ["toggle", False, "toggle"]
# toggle_state = ["toggle", True, "toggle"]
# toggle_state = [False, "toggle", "toggle"]
# toggle_state = [True, "toggle", "toggle"]
filter_state = init_filters(toggle_state)
idx = 2
complete = 2
terminate = False
while True:
print(f"\t{filter_state[0]:<5}{filter_state[1]:<5}{filter_state[2]:<5}")
### do something here with the filters ###
while True:
if toggle_state[idx] == "toggle" and not filter_state[idx]:
filter_state[idx] = True
break
elif complete < -1:
terminate = True
break
elif toggle_state[idx] == "toggle" and idx == len(toggle_state) - 1:
filter_state[idx] = False
if complete != 0:
filter_state[complete] = False
complete -= 1
if complete < 0:
idx = 1
else:
idx = complete
continue
elif toggle_state[idx] == "toggle" and idx != len(toggle_state) - 1:
if complete == 0 and idx == 0:
idx += 1
idx += 1
if terminate:
break
def init_filters(toggle_state):
"""..."""
filters = []
for idx in toggle_state:
if idx == "toggle":
filters.append(False)
else:
filters.append(idx)
return filters
if __name__ == "__main__":
toggle_filters()
However, when I've attempted to add in static values for the filters, it all goes to hell. I updated the toggle_filters() function to start looking for filters that are not set to toggle.
####CODE BLOCK 2####
import time
def toggle_filters():
# toggle_state = ["toggle", "toggle", "toggle"]
toggle_state = ["toggle", "toggle", False]
# toggle_state = ["toggle", "toggle", True]
# toggle_state = ["toggle", False, "toggle"]
# toggle_state = ["toggle", True, "toggle"]
# toggle_state = [False, "toggle", "toggle"]
# toggle_state = [True, "toggle", "toggle"]
filter_state = init_filters(toggle_state)
idx = 2
complete = 2
terminate = False
while True:
print(f"\t{filter_state[0]:<5}{filter_state[1]:<5}{filter_state[2]:<5}")
### do something here with the filters ###
while True:
if toggle_state[idx] == "toggle" and not filter_state[idx]:
filter_state[idx] = True
break
elif complete < -1:
terminate = True
break
elif toggle_state[idx] == "toggle" and idx == len(toggle_state) - 1:
filter_state[idx] = False
if complete != 0:
filter_state[complete] = False
complete -= 1
if complete < 0:
idx = 1
else:
idx = complete
continue
elif toggle_state[idx] == "toggle" and idx != len(toggle_state) - 1:
if complete == 0 and idx == 0:
idx += 1
idx += 1
elif toggle_state[idx] != "toggle" and idx == len(toggle_state) - 1:
if complete != 0:
pass
complete -= 1
if complete < 0:
idx = 1
else:
idx = complete
continue
elif toggle_state[idx] != "toggle" and idx != len(toggle_state) - 1:
if complete == 2 and idx == 2:
complete = 1
idx = complete
if complete == 1 and idx == 1:
complete = 0
idx = complete
else:
idx -= 1
if terminate:
break
def init_filters(toggle_state):
"""..."""
filters = []
for idx in toggle_state:
if idx == "toggle":
filters.append(False)
else:
filters.append(idx)
return filters
if __name__ == "__main__":
toggle_filters()
Which fails each time, and honestly I imagine I'm approaching this from the wrong direction, just based on the sheer number of conditions I have to set. Does anyone have any suggestions as to what I should be looking at?
UPDATE:
If you take the first block of code, it will run as is. The output will look like a truth table.
0 0 0
0 0 1
0 1 0
0 1 1
1 0 0
1 0 1
1 1 0
1 1 1
This is when you set the filters to all toggle.
I've updated the second code block as a complete MUC.
Here the output looks like this:
0 0 0
0 1 0
1 1 0
However, it should look like this:
0 0 0
0 1 0
1 0 0
1 1 0
depending on which filter you set static, the outputs are not correct.
A:
This gives the same output with less complication. itertools.product is a function that gives you all the combinations of each state listed. A TOGGLE filter can be zero or one, while a FALSE or TRUE state only provides a zero or one state, respectively.
Does this manage the states you want?
import itertools
TOGGLE = [0,1]
FALSE = [0]
TRUE = [1]
def toggle_filters(toggle_state):
for state in itertools.product(*toggle_state):
print(*state)
toggle_filters([TOGGLE, TOGGLE, TOGGLE])
print()
toggle_filters([TOGGLE, TOGGLE, FALSE])
Output:
0 0 0
0 0 1
0 1 0
0 1 1
1 0 0
1 0 1
1 1 0
1 1 1
0 0 0
0 1 0
1 0 0
1 1 0
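If the configuration file still stores the original "toggle"/True/False values, a small translation step bridges the two formats. This is a minimal sketch, assuming that mapping; the translate helper is my own naming, not part of the question's code:
import itertools

def translate(setting):
    # "toggle" can be off or on; True/False pin the filter to one state.
    if setting == "toggle":
        return [0, 1]
    return [1] if setting else [0]

def toggle_filters(toggle_state):
    # Expand every combination of the per-filter state lists.
    for state in itertools.product(*(translate(s) for s in toggle_state)):
        print(*state)

toggle_filters(["toggle", "toggle", False])

This prints the same four rows as the second table above.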
|
Toggle Boolean value based on a triple state filter
|
I'm having a brain melting time with this. For some reason I thought it would be easier, but I'm struggling with this.
I have an application that a user can configure before running, based on the parameters the user wants to test. There are 3 filters that the user can either turn on, turn off, or toggle.
If the user wants a filter on, he will set the filter in the configuration file to True. If he wants it off, he sets it to False. If, however, the user wishes to run the test with the filter on and then again off, he can set the configuration file to toggle.
Here are examples of filter1, filter2, and filter3 stored in a list.
toggle_state = ["toggle", "toggle", False]
toggle_state = ["toggle", True, "toggle"]
toggle_state = [False, "toggle", True]
toggle_state = [True, False, False]
...
Any combination should be available for testing purposes.
I have implemented nested while loops to accomplish what I'm attempting to do. However, I have had no real success. I have been able to make it work, with just toggle for all three filters.
I stripped out the functions related to my application in a simple MUC script below.
#####CODE BLOCK 1######
import time
def toggle_filters():
toggle_state = ["toggle", "toggle", "toggle"]
# toggle_state = ["toggle", "toggle", False]
# toggle_state = ["toggle", "toggle", True]
# toggle_state = ["toggle", False, "toggle"]
# toggle_state = ["toggle", True, "toggle"]
# toggle_state = [False, "toggle", "toggle"]
# toggle_state = [True, "toggle", "toggle"]
filter_state = init_filters(toggle_state)
idx = 2
complete = 2
terminate = False
while True:
print(f"\t{filter_state[0]:<5}{filter_state[1]:<5}{filter_state[2]:<5}")
### do something here with the filters ###
while True:
if toggle_state[idx] == "toggle" and not filter_state[idx]:
filter_state[idx] = True
break
elif complete < -1:
terminate = True
break
elif toggle_state[idx] == "toggle" and idx == len(toggle_state) - 1:
filter_state[idx] = False
if complete != 0:
filter_state[complete] = False
complete -= 1
if complete < 0:
idx = 1
else:
idx = complete
continue
elif toggle_state[idx] == "toggle" and idx != len(toggle_state) - 1:
if complete == 0 and idx == 0:
idx += 1
idx += 1
if terminate:
break
def init_filters(toggle_state):
"""..."""
filters = []
for idx in toggle_state:
if idx == "toggle":
filters.append(False)
else:
filters.append(idx)
return filters
if __name__ == "__main__":
toggle_filters()
However, when I've attempted to add in static values for the filters, it all goes to hell. I updated the toggle_filters() function to start looking for filters that are not set to toggle.
####CODE BLOCK 2####
import time
def toggle_filters():
# toggle_state = ["toggle", "toggle", "toggle"]
toggle_state = ["toggle", "toggle", False]
# toggle_state = ["toggle", "toggle", True]
# toggle_state = ["toggle", False, "toggle"]
# toggle_state = ["toggle", True, "toggle"]
# toggle_state = [False, "toggle", "toggle"]
# toggle_state = [True, "toggle", "toggle"]
filter_state = init_filters(toggle_state)
idx = 2
complete = 2
terminate = False
while True:
print(f"\t{filter_state[0]:<5}{filter_state[1]:<5}{filter_state[2]:<5}")
### do something here with the filters ###
while True:
if toggle_state[idx] == "toggle" and not filter_state[idx]:
filter_state[idx] = True
break
elif complete < -1:
terminate = True
break
elif toggle_state[idx] == "toggle" and idx == len(toggle_state) - 1:
filter_state[idx] = False
if complete != 0:
filter_state[complete] = False
complete -= 1
if complete < 0:
idx = 1
else:
idx = complete
continue
elif toggle_state[idx] == "toggle" and idx != len(toggle_state) - 1:
if complete == 0 and idx == 0:
idx += 1
idx += 1
elif toggle_state[idx] != "toggle" and idx == len(toggle_state) - 1:
if complete != 0:
pass
complete -= 1
if complete < 0:
idx = 1
else:
idx = complete
continue
elif toggle_state[idx] != "toggle" and idx != len(toggle_state) - 1:
if complete == 2 and idx == 2:
complete = 1
idx = complete
if complete == 1 and idx == 1:
complete = 0
idx = complete
else:
idx -= 1
if terminate:
break
def init_filters(toggle_state):
"""..."""
filters = []
for idx in toggle_state:
if idx == "toggle":
filters.append(False)
else:
filters.append(idx)
return filters
if __name__ == "__main__":
toggle_filters()
Which fails each time, and honestly I imagine I'm approaching this from the wrong direction, just based on the sheer number of conditions I have to set. Does anyone have any suggestions as to what I should be looking at?
UPDATE:
If you take the first block of code, it will run as is. The output will look like a truth table.
0 0 0
0 0 1
0 1 0
0 1 1
1 0 0
1 0 1
1 1 0
1 1 1
This is when you set the filters to all toggle.
I've updated the second code block as a complete MUC.
Here the output looks like this:
0 0 0
0 1 0
1 1 0
However, it should look like this:
0 0 0
0 1 0
1 0 0
1 1 0
depending on which filter you set static, the outputs are not correct.
|
[
"This gives the same output with less complication. itertools.product is a function that gives you all the combinations of each state listed. A TOGGLE filter can be zero or one, while a FALSE or TRUE state only provides a zero or one state, respectively.\nDoes this manage the states you want?\nimport itertools\n\nTOGGLE = [0,1]\nFALSE = [0]\nTRUE = [1]\n\ndef toggle_filters(toggle_state):\n for state in itertools.product(*toggle_state):\n print(*state)\n\ntoggle_filters([TOGGLE, TOGGLE, TOGGLE])\nprint()\ntoggle_filters([TOGGLE, TOGGLE, FALSE])\n\nOutput:\n0 0 0\n0 0 1\n0 1 0\n0 1 1\n1 0 0\n1 0 1\n1 1 0\n1 1 1\n\n0 0 0\n0 1 0\n1 0 0\n1 1 0\n\n"
] |
[
2
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074661476_python.txt
|
Q:
Remove Columns with missing values above a threshold pandas
I am doing data preprocessing and want to remove features/columns which have more than say 10% missing values.
I have made the below code:
df_missing=df.isna()
result=df_missing.sum()/len(df)
result
Default 0.010066
Income 0.142857
Age 0.109090
Name 0.047000
Gender 0.000000
Type of job 0.200000
Amt of credit 0.850090
Years employed 0.009003
dtype: float64
I want df to have columns only where there are no missing values above 10%.
Expected output:
df
Default Name Gender Years employed
(columns where there were missing values greater than 10% are removed.)
I have tried
result.iloc[:,0]
IndexingError: Too many indexers
Please help
A:
Because dividing the sum by the length is just the mean, instead of df_missing.sum()/len(df) you can use df_missing.mean():
result = df.isna().mean()
Then filter by DataFrame.loc with : for all rows and columns by mask:
df = df.loc[:,result > .1]
A:
it should be df = df.loc[:,result < .1] as the user only wants to keep the columns that have less than 10% of the rows missing
A:
pandas has built in methods for such things:
df_clean = df.dropna(axis=1, thresh=int(len(df)*0.9), inplace=False)
Here thresh is the minimum number of non-missing values a column needs in order to survive, so keeping columns with at most 10% missing means requiring at least 90% non-missing.
Or if you don't want to create an extra dataframe object you can do it inplace:
df.dropna(axis=1, thresh=int(len(df)*0.9), inplace=True)
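For a quick sanity check of the approaches above, here is a minimal, self-contained example; the toy column names are invented for the demo:
import numpy as np
import pandas as pd

# Toy frame: column 'b' is 50% missing, the others are complete.
df = pd.DataFrame({
    "a": [1, 2, 3, 4],
    "b": [1, np.nan, 3, np.nan],
    "c": [1, 2, 3, 4],
})

result = df.isna().mean()        # fraction of missing values per column
kept = df.loc[:, result <= 0.1]  # keep columns with at most 10% missing

print(kept.columns.tolist())     # ['a', 'c']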
|
Remove Columns with missing values above a threshold pandas
|
I am doing data preprocessing and want to remove features/columns which have more than say 10% missing values.
I have made the below code:
df_missing=df.isna()
result=df_missing.sum()/len(df)
result
Default 0.010066
Income 0.142857
Age 0.109090
Name 0.047000
Gender 0.000000
Type of job 0.200000
Amt of credit 0.850090
Years employed 0.009003
dtype: float64
I want df to have columns only where there are no missing values above 10%.
Expected output:
df
Default Name Gender Years employed
(columns where there were missing values greater than 10% are removed.)
I have tried
result.iloc[:,0]
IndexingError: Too many indexers
Please help
|
[
"Because division of sum by length is mean, you can instead df_missing.sum()/len(df) use df_missing.mean():\nresult = df.isna().mean()\n\nThen filter by DataFrame.loc with : for all rows and columns by mask:\ndf = df.loc[:,result > .1]\n\n",
"it should be df = df.loc[:,result < .1] as the user only want to keep the columns that have less than 10% of the rows missing\n",
"pandas has built in methods for such things:\ndf_clean = df.dropna(axis=1, thresh=(len(df)*.1), inplace=False)\nOr if you don't want to create an extra dataframe object you can do it inplace:\ndf.dropna(axis=1, thresh=(len(df)*.1), inplace=True)\n"
] |
[
4,
1,
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0060450808_pandas_python.txt
|
Q:
Flutter "CircularProgressIndicator" widgets breaks widgets alignement
I'm currently building a scraping application using Flutter.
I encounter a problem when I try to display a loader while scraping: the appearance of the loader seems to totally break the cross-axis alignment of my Columns.
First, my application looks like this:
https://i.stack.imgur.com/6Tbz7.png
Then, when I hit the "scrap" button, I update my state in order to display a loader, and this happens:
https://i.stack.imgur.com/0MIi6.png
Here are the two pieces of code rendering the application:
My main class rendering a Flutter logo & my main widget
class _MyHomePageState extends State<MyHomePage> {
@override
Widget build(BuildContext context) {
return Scaffold(
body: Container(
child: Column(
mainAxisAlignment: MainAxisAlignment.start,
crossAxisAlignment: CrossAxisAlignment.center,
children: [
FlutterLogo(
size: 400,
style: FlutterLogoStyle.horizontal,
),
ScrapBody(),
],
),
),
);
}
}
My main Widget containing rendering management logic as follow
Widget build(BuildContext context) {
return Column(
crossAxisAlignment: CrossAxisAlignment.center,
children: [
ScrapForm(
scrap: scrap,
exportToCsv: exportToCsv,
isScrapSuccess: isScrapSuccess,
),
isLoading
// ? Container()
? Column(
children: [
CircularProgressIndicator(),
Text(
"Scrapping page ${pageScrapIndex.toString()} on ${totalPages.toString()}",
),
],
)
: Column(
children: [
ScrapResult(
reviewerListResult: reviewerList,
),
],
),
],
);
}
As you can see, I tried to constrain the widgets that compose my Column with "crossAxisAlignment: CrossAxisAlignment.center" in both parts, but it doesn't work: the widgets keep going to the left when I render my loader, and only when I render my loader.
After many searches about the circular loader and a look in the widget core, I didn't find anything that could explain my case. Any help would be much appreciated; thanks in advance.
A:
Try to wrap your base Column with a Container and give its width as double.infinity. Also add crossAxisAlignment: CrossAxisAlignment.center to your circular progress indicator's Column if the first solution doesn't work.
|
Flutter "CircularProgressIndicator" widgets breaks widgets alignement
|
I'm currently building a scraping application using Flutter.
I encounter a problem when I try to display a loader while scraping: the appearance of the loader seems to totally break the cross-axis alignment of my Columns.
First, my application looks like this:
https://i.stack.imgur.com/6Tbz7.png
Then, when I hit the "scrap" button, I update my state in order to display a loader, and this happens:
https://i.stack.imgur.com/0MIi6.png
Here are the two pieces of code rendering the application:
My main class rendering a Flutter logo & my main widget
class _MyHomePageState extends State<MyHomePage> {
@override
Widget build(BuildContext context) {
return Scaffold(
body: Container(
child: Column(
mainAxisAlignment: MainAxisAlignment.start,
crossAxisAlignment: CrossAxisAlignment.center,
children: [
FlutterLogo(
size: 400,
style: FlutterLogoStyle.horizontal,
),
ScrapBody(),
],
),
),
);
}
}
My main Widget containing rendering management logic as follow
Widget build(BuildContext context) {
return Column(
crossAxisAlignment: CrossAxisAlignment.center,
children: [
ScrapForm(
scrap: scrap,
exportToCsv: exportToCsv,
isScrapSuccess: isScrapSuccess,
),
isLoading
// ? Container()
? Column(
children: [
CircularProgressIndicator(),
Text(
"Scrapping page ${pageScrapIndex.toString()} on ${totalPages.toString()}",
),
],
)
: Column(
children: [
ScrapResult(
reviewerListResult: reviewerList,
),
],
),
],
);
}
As you can see, I tried to constrain the widgets that compose my Column with "crossAxisAlignment: CrossAxisAlignment.center" in both parts, but it doesn't work: the widgets keep going to the left when I render my loader, and only when I render my loader.
After many searches about the circular loader and a look in the widget core, I didn't find anything that could explain my case. Any help would be much appreciated; thanks in advance.
|
[
"try to wrap your base column with Container and give it's width as double.infinity. and also add crossAxisAlignment: CrossAxisAlignment.center for your circular progress dialog's Column if the first solution doesn't work.\n"
] |
[
0
] |
[] |
[] |
[
"dart",
"flutter"
] |
stackoverflow_0074661839_dart_flutter.txt
|
Q:
Regex for string to contain at least one letter and number
This regex expression will match the specified number of word characters, with a space on either side:
(?<=\s)(?:\w){12}(?=\s)
How can I modify this expression so that it returns only the string containing the mixed-alphanumeric result containing at least one letter and at least one number? Here is the current Regex Demo.
A:
Try:
\b(?=[^\s]*\d)(?=[^\s]*[a-zA-Z])\w{12}\b
Regex demo.
\b - word boundary
(?=[^\s]*\d) - continue matching if ahead is a number preceded with any amount of non-space characters.
(?=[^\s]*[a-zA-Z]) - the same with letters
\w{12} - match 12 word characters
\b - word boundary
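To see the pattern in action, here is a quick Python check; the sample strings are made up for the demo:
import re

pattern = re.compile(r"\b(?=[^\s]*\d)(?=[^\s]*[a-zA-Z])\w{12}\b")

samples = "abc123def456 abcdefghijkl 123456789012 a1b2c3d4e5f6"
print(pattern.findall(samples))
# ['abc123def456', 'a1b2c3d4e5f6'] -- the all-letter and all-digit runs are skipped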
|
Regex for string to contain at least one letter and number
|
This regex expression will match the specified number of word characters, with a space on either side:
(?<=\s)(?:\w){12}(?=\s)
How can I modify this expression so that it returns only the string containing the mixed-alphanumeric result containing at least one letter and at least one number? Here is the current Regex Demo.
|
[
"Try:\n\\b(?=[^\\s]*\\d)(?=[^\\s]*[a-zA-Z])\\w{12}\\b\n\nRegex demo.\n\n\\b - word boundary\n(?=[^\\s]*\\d) - continue matching if ahead is a number preceded with any amount of non-space characters.\n(?=[^\\s]*[a-zA-Z]) - the same with letters\n\\w{12} - match 12 word characters\n\\b - word boundary\n"
] |
[
2
] |
[] |
[] |
[
"regex"
] |
stackoverflow_0074661974_regex.txt
|
Q:
Date is displaying in different time zone when parsing a string using SimpleDateFormat
public class ParseDate {
public static final String DATE_FORMAT = "yyyy-MM-dd";
public static SimpleDateFormat DateFormatter = new SimpleDateFormat(DATE_FORMAT);
public static void main(String[] args) {
try {
System.out.println("Converting 2020-12-31 to "+DateFormatter.parse("2020-12-31"));
System.out.println("Converting 2020-06-30 to "+DateFormatter.parse("2020-06-30"));
} catch (ParseException e) {
e.printStackTrace();
}
}
}
Output:
Converting 2020-12-31 to Thu Dec 31 00:00:00 GMT 2020
Converting 2020-06-30 to Tue Jun 30 00:00:00 BST 2020
If I execute this code, I am getting different time zones (GMT and BST) as output.
How can I get the same time zone in the Date output irrespective of the input strings?
When I executed the code, I expected the Date object to contain the same time zone (either GMT or BST).
I have tried with the below time zone:
A:
java.util.Date is not a true date-time object; rather, it just represents the number of milliseconds from January 1, 1970, 00:00:00 GMT. The Date#toString returns this millisecond value into a string applying the default timezone which is Europe/London in your case and therefore it prints GMT and BST because of DST.
java.time
The java.time API, released with Java-8 in March 2014, supplanted the error-prone legacy date-time API. Since then, using this modern date-time API has been strongly recommended.
Demo using modern date-time API
import java.time.LocalDate;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;
public class Main {
public static void main(String[] args) {
ZoneId zoneId = ZoneId.of("Europe/London");
ZonedDateTime zdt1 = LocalDate.parse("2020-12-31").atStartOfDay(zoneId);
ZonedDateTime zdt2 = LocalDate.parse("2020-06-30").atStartOfDay(zoneId);
System.out.println("Converting 2020-12-31 to " + zdt1);
System.out.println("Converting 2020-06-30 to " + zdt2);
// Date#toString like format
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("EEE MMM dd HH:mm:ss z uuuu", Locale.ENGLISH);
System.out.println(zdt1.format(formatter));
System.out.println(zdt2.format(formatter));
}
}
Output:
Converting 2020-12-31 to 2020-12-31T00:00Z[Europe/London]
Converting 2020-06-30 to 2020-06-30T00:00+01:00[Europe/London]
Thu Dec 31 00:00:00 GMT 2020
Tue Jun 30 00:00:00 BST 2020
Note that since java.time API is based on ISO 8601 standard, you do not need to specify a parser ( DateTimeFormatter in the case of java.time API) to parse a date string which is already in this format.
Learn more about the modern Date-Time API from Trail: Date Time.
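The GMT/BST split is purely a daylight-saving effect of the Europe/London zone, and it is easy to reproduce in any language with a tz database. For instance, a quick Python check (not part of the original Java code):
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library since Python 3.9

london = ZoneInfo("Europe/London")
print(datetime(2020, 12, 31, tzinfo=london).tzname())  # GMT (winter time)
print(datetime(2020, 6, 30, tzinfo=london).tzname())   # BST (summer time, DST)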
|
Date is displaying in different time zone when parsing a string using SimpleDateFormat
|
public class ParseDate {
public static final String DATE_FORMAT = "yyyy-MM-dd";
public static SimpleDateFormat DateFormatter = new SimpleDateFormat(DATE_FORMAT);
public static void main(String[] args) {
try {
System.out.println("Converting 2020-12-31 to "+DateFormatter.parse("2020-12-31"));
System.out.println("Converting 2020-06-30 to "+DateFormatter.parse("2020-06-30"));
} catch (ParseException e) {
e.printStackTrace();
}
}
}
Output:
Converting 2020-12-31 to Thu Dec 31 00:00:00 GMT 2020
Converting 2020-06-30 to Tue Jun 30 00:00:00 BST 2020
If I execute this code, I am getting different time zones (GMT and BST) as output.
How can I get the same time zone in the Date output irrespective of the input strings?
When I executed the code, I expected the Date object to contain the same time zone (either GMT or BST).
I have tried with the below time zone:
|
[
"java.util.Date is not a true date-time object; rather, it just represents the number of milliseconds from January 1, 1970, 00:00:00 GMT. The Date#toString returns this millisecond value into a string applying the default timezone which is Europe/London in your case and therefore it prints GMT and BST because of DST.\njava.time\nThe java.time API, released with Java-8 in March 2014, supplanted the error-prone legacy date-time API. Since then, using this modern date-time API has been strongly recommended.\nDemo using modern date-time API\nimport java.time.LocalDate;\nimport java.time.ZoneId;\nimport java.time.ZonedDateTime;\nimport java.time.format.DateTimeFormatter;\nimport java.util.Locale;\n\npublic class Main {\n public static void main(String[] args) {\n ZoneId zoneId = ZoneId.of(\"Europe/London\");\n ZonedDateTime zdt1 = LocalDate.parse(\"2020-12-31\").atStartOfDay(zoneId);\n ZonedDateTime zdt2 = LocalDate.parse(\"2020-06-30\").atStartOfDay(zoneId);\n System.out.println(\"Converting 2020-12-31 to \" + zdt1);\n System.out.println(\"Converting 2020-06-30 to \" + zdt2);\n\n // Date#toString like format\n DateTimeFormatter formatter = DateTimeFormatter.ofPattern(\"EEE MMM dd HH:mm:ss z uuuu\", Locale.ENGLISH);\n System.out.println(zdt1.format(formatter));\n System.out.println(zdt2.format(formatter));\n }\n}\n\nOutput:\nConverting 2020-12-31 to 2020-12-31T00:00Z[Europe/London]\nConverting 2020-06-30 to 2020-06-30T00:00+01:00[Europe/London]\nThu Dec 31 00:00:00 GMT 2020\nTue Jun 30 00:00:00 BST 2020\n\nNote that since java.time API is based on ISO 8601 standard, you do not need to specify a parser ( DateTimeFormatter in the case of java.time API) to parse a date string which is already in this format.\nLearn more about the modern Date-Time API from Trail: Date Time.\n"
] |
[
1
] |
[] |
[] |
[
"date_parsing",
"java",
"simpledateformat",
"timezone"
] |
stackoverflow_0074656330_date_parsing_java_simpledateformat_timezone.txt
|
Q:
Which Chrome keyboard shortcuts cannot be overridden with Javascript?
You can preventDefault() on Chrome shortcuts with JavaScript, but you can't do it with all of them.
Ctrl + S and Ctrl + F you can override.
Ctrl + W you cannot. This makes sense.
Ctrl + L, though, I was surprised to find you also cannot override.
What shortcuts are overridable and which aren't in Chrome?
A:
Chrome keyboard shortcuts cannot be overridden with JavaScript. Keyboard shortcuts are a system-level feature that allows users to quickly access certain functions using keyboard keys or key combinations. These shortcuts are implemented at the operating system level, and they cannot be changed or overridden by JavaScript code running in a web page.
For example, the Ctrl + T shortcut is used to open a new tab in Chrome, and this shortcut cannot be overridden by JavaScript. If you try to bind this shortcut to a different action using JavaScript, it will not work. Similarly as you've said in your question, the Ctrl + W shortcut is used to close the current tab, and this shortcut cannot be overridden either.
However, you can use JavaScript to bind custom keyboard shortcuts to your web page. For example, you can use the addEventListener() method to listen for keyboard events and trigger custom actions when certain keys or key combinations are pressed. This allows you to create your own custom keyboard shortcuts that are specific to your web page, but you cannot override the default system-level shortcuts provided by Chrome.
Here is an example of using the addEventListener() method to bind a custom keyboard shortcut to a web page:
document.addEventListener('keydown', function (event) {
if (event.ctrlKey && event.keyCode == 83) {
// Ctrl + S was pressed - do something
}
});
|
Which Chrome keyboard shortcuts cannot be overridden with Javascript?
|
You can preventDefault() on Chrome shortcuts with JavaScript, but you can't do it with all of them.
Ctrl + S and Ctrl + F you can override.
Ctrl + W you cannot. This makes sense.
Ctrl + L, though, I was surprised to find you also cannot override.
What shortcuts are overridable and which aren't in Chrome?
|
[
"Chrome keyboard shortcuts cannot be overridden with JavaScript. Keyboard shortcuts are a system-level feature that allows users to quickly access certain functions using keyboard keys or key combinations. These shortcuts are implemented at the operating system level, and they cannot be changed or overridden by JavaScript code running in a web page.\nFor example, the Ctrl + T shortcut is used to open a new tab in Chrome, and this shortcut cannot be overridden by JavaScript. If you try to bind this shortcut to a different action using JavaScript, it will not work. Similarly as you've said in your question, the Ctrl + W shortcut is used to close the current tab, and this shortcut cannot be overridden either.\nHowever, you can use JavaScript to bind custom keyboard shortcuts to your web page. For example, you can use the addEventListener() method to listen for keyboard events and trigger custom actions when certain keys or key combinations are pressed. This allows you to create your own custom keyboard shortcuts that are specific to your web page, but you cannot override the default system-level shortcuts provided by Chrome.\nHere is an example of using the addEventListener() method to bind a custom keyboard shortcut to a web page:\ndocument.addEventListener('keydown', function (event) {\n if (event.ctrlKey && event.keyCode == 83) {\n // Ctrl + S was pressed - do something\n }\n});\n\n"
] |
[
0
] |
[] |
[] |
[
"chromium",
"google_chrome",
"javascript"
] |
stackoverflow_0044998250_chromium_google_chrome_javascript.txt
|
Q:
How to generate a list with every Monday between two dates, excluding those in a specific list, using pandas
I want to generate a dataframe with pandas where one of the columns is filled with all Mondays between two dates. But I need to exclude some Mondays that are in a specific list. I could generate the column with the Mondays, but I could not find how to remove the Mondays in the given list.
I generate the mondays using:
import pandas as pd
st=pd.to_datetime('8/22/2022')
ed=pd.to_datetime('12/22/2022')
a1=pd.date_range(start=st,end=ed, freq='W-MON')
But I would like to exclude the mondays that are in this list
fer=pd.to_datetime(['09/07/2022','10/12/2022','10/15/2022','10/28/2022','11/01/2022','11/14/2022','11/15/2022','11/20/2022'])
I was not able to find the solution online.
A:
IIUC, you can use a negative pandas.Index.isin :
a1= a1[~a1.isin(fer)]
# Output :
print(a1)
DatetimeIndex(['2022-08-22', '2022-08-29', '2022-09-05', '2022-09-12',
'2022-09-19', '2022-09-26', '2022-10-03', '2022-10-10',
'2022-10-17', '2022-10-24', '2022-10-31', '2022-11-07',
'2022-11-21', '2022-11-28', '2022-12-05', '2022-12-12',
'2022-12-19'],
dtype='datetime64[ns]', freq=None)
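Since the question ultimately wants a dataframe column, the filtered index drops straight into one. A short sketch; the exclusion list is abbreviated and the column name is arbitrary:
import pandas as pd

a1 = pd.date_range(start="2022-08-22", end="2022-12-22", freq="W-MON")
fer = pd.to_datetime(["11/14/2022"])  # abbreviated exclusion list for the demo

df = pd.DataFrame({"monday": a1[~a1.isin(fer)]})
print(len(df))  # 17 Mondays remain after excluding 2022-11-14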
A:
Here's a code snippet that hopefully answers your question. The code snippet uses the pandas package to remove a list of blacklisted dates from a list of dates. It does this by first generating a list of every Monday between a specified start and end date using the generate_mondays function. It then defines a list of blacklisted Mondays and converts both lists to pandas Series objects.
Next, the code uses the Series.isin() method to create a Boolean mask indicating which dates in the mondays Series are not in the blacklisted_mondays Series. This mask is then used to filter the mondays Series, and the resulting Series is converted to a list using the tolist() method.
The resulting list of non-blacklisted Mondays can be accessed by calling the non_blacklisted_mondays variable, which is the final line of the code snippet. This variable contains a list of all the Mondays between the start and end dates, with the blacklisted Mondays removed.
# Import the date and timedelta classes from the datetime module
from datetime import date, timedelta
# Import the pandas package
import pandas as pd
# Function to generate a list of every Monday between two dates
def generate_mondays(start_date, end_date):
# Create a variable to hold the list of Mondays
mondays = []
# Create a variable to hold the current date, starting with the start date
current_date = start_date
# Calculate the number of days between the start date and the first Monday.
# (7 - start_date.weekday()) % 7 is the number of days until the next Monday,
# and it is zero when start_date is already a Monday.
days_to_first_monday = (7 - start_date.weekday()) % 7
# Add the number of days to the first Monday to the current date to move to
# the first Monday
current_date += timedelta(days=days_to_first_monday)
# Loop until we reach the end date
while current_date <= end_date:
# Append the current date to the list of Mondays
mondays.append(current_date)
# Move to the next Monday by adding 7 days
current_date += timedelta(days=7)
# Return the list of Mondays
return mondays
# Set the start and end dates
start_date = date(2022, 1, 1)
end_date = date(2022, 12, 31)
# Generate a list of every Monday between the start and end dates
mondays = generate_mondays(start_date, end_date)
# Define a list of blacklisted Mondays
blacklisted_mondays = [
date(2022, 1, 10),
date(2022, 2, 14),
date(2022, 3, 21),
]
# Convert the list of mondays and the list of blacklisted mondays to pandas
# Series objects
mondays_series = pd.Series(mondays)
blacklisted_mondays_series = pd.Series(blacklisted_mondays)
# Use the pandas Series.isin() method to create a Boolean mask indicating
# which dates in the mondays Series are not in the blacklisted_mondays Series
mask = ~mondays_series.isin(blacklisted_mondays_series)
# Use the mask to filter the mondays Series and convert the resulting Series
# to a list
non_blacklisted_mondays = mondays_series[mask].tolist()
# Print the resulting list of non-blacklisted Mondays
print(non_blacklisted_mondays)
|
How to generate a list with every Monday between two dates, excluding those in a specific list, using pandas
|
I want to generate a dataframe with pandas where one of the columns is filled with all Mondays between two dates. But I need to exclude some Mondays that are in a specific list. I could generate the column with the Mondays, but I could not find how to remove the Mondays in the given list.
I generate the mondays using:
import pandas as pd
st=pd.to_datetime('8/22/2022')
ed=pd.to_datetime('12/22/2022')
a1=pd.date_range(start=st,end=ed, freq='W-MON')
But I would like to exclude the mondays that are in this list
fer=pd.to_datetime(['09/07/2022','10/12/2022','10/15/2022','10/28/2022','11/01/2022','11/14/2022','11/15/2022','11/20/2022'])
I was not able to find the solution online.
|
[
"IIUC, you can use a negative pandas.Index.isin :\na1= a1[~a1.isin(fer)]\n\n# Output :\nprint(a1)\n\nDatetimeIndex(['2022-08-22', '2022-08-29', '2022-09-05', '2022-09-12',\n '2022-09-19', '2022-09-26', '2022-10-03', '2022-10-10',\n '2022-10-17', '2022-10-24', '2022-10-31', '2022-11-07',\n '2022-11-21', '2022-11-28', '2022-12-05', '2022-12-12',\n '2022-12-19'],\n dtype='datetime64[ns]', freq=None)\n\n",
"Here's a code snippet that hopefully answers your question. The code snippet uses the pandas package to remove a list of blacklisted dates from a list of dates. It does this by first generating a list of every Monday between a specified start and end date using the generate_mondays function. It then defines a list of blacklisted Mondays and converts both lists to pandas Series objects.\nNext, the code uses the Series.isin() method to create a Boolean mask indicating which dates in the mondays Series are not in the blacklisted_mondays Series. This mask is then used to filter the mondays Series, and the resulting Series is converted to a list using the tolist() method.\nThe resulting list of non-blacklisted Mondays can be accessed by calling the non_blacklisted_mondays variable, which is the final line of the code snippet. This variable contains a list of all the Mondays between the start and end dates, with the blacklisted Mondays removed.\n# Import the date and timedelta classes from the datetime module\nfrom datetime import date, timedelta\n\n# Import the pandas package\nimport pandas as pd\n\n# Function to generate a list of every Monday between two dates\ndef generate_mondays(start_date, end_date):\n # Create a variable to hold the list of Mondays\n mondays = []\n\n # Create a variable to hold the current date, starting with the start date\n current_date = start_date\n\n # Calculate the number of days between the start date and the first Monday\n # We use (7 - start_date.weekday()) % 7 to find the number of days to the\n # next Monday, and then subtract one to get the number of days to the first\n # Monday\n days_to_first_monday = (7 - start_date.weekday()) % 7 - 1\n\n # Add the number of days to the first Monday to the current date to move to\n # the first Monday\n current_date += timedelta(days=days_to_first_monday)\n\n # Loop until we reach the end date\n while current_date <= end_date:\n # Append the current date to the list of Mondays\n mondays.append(current_date)\n\n # Move to the next Monday by adding 7 days\n current_date += timedelta(days=7)\n\n # Return the list of Mondays\n return mondays\n\n# Set the start and end dates\nstart_date = date(2022, 1, 1)\nend_date = date(2022, 12, 31)\n\n# Generate a list of every Monday between the start and end dates\nmondays = generate_mondays(start_date, end_date)\n\n# Define a list of blacklisted Mondays\nblacklisted_mondays = [\n date(2022, 1, 10),\n date(2022, 2, 14),\n date(2022, 3, 21),\n]\n\n# Convert the list of mondays and the list of blacklisted mondays to pandas\n# Series objects\nmondays_series = pd.Series(mondays)\nblacklisted_mondays_series = pd.Series(blacklisted_mondays)\n\n# Use the pandas Series.isin() method to create a Boolean mask indicating\n# which dates in the mondays Series are not in the blacklisted_mondays Series\nmask = ~mondays_series.isin(blacklisted_mondays_series)\n\n# Use the mask to filter the mondays Series and convert the resulting Series\n# to a list\nnon_blacklisted_mondays = mondays_series[mask].tolist()\n\n# Print the resulting list of non-blacklisted Mondays\nprint(non_blacklisted_mondays)\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"datetime",
"pandas",
"python"
] |
stackoverflow_0074661985_datetime_pandas_python.txt
|
Q:
flask template not rendering as expected
Expected output is 'not detected' but I get 'no error' on get and post. Why?
index.html
{% if error %}
<p>{{ error }}</p>
{% else %}
<p>no error</p>
{% endif %}
main.py
@app.route('/', methods=['GET', 'POST'])
def index():
if request.method == 'GET':
print('get')
return render_template('index.html')
elif request.method == 'POST':
print('post')
post_data = request.get_json(force=True)
if post_data['message'] == False:
print('false')
print('not detected')
return render_template('index.html', error='not detected')
edit:
not sure if this is what's causing the errors.
script.js
window.onload = (event) => {
if (!window.ethereum) {
console.log('error - not detected');
fetch(`${window.origin}/`, {
method: 'POST',
headers: {'content-type': 'application/json'},
body: JSON.stringify({
'message': false
})
});
} else {
console.log('detected');
}
};
A:
The problem is with your js code. As I can see, you make fetch call providing parameters and not providing callback for response processing.
It should be:
fetch(`${window.origin}/`, {
method: 'POST',
headers: {'content-type': 'application/json'},
body: JSON.stringify({
'message': false
})
})
.then((response) => response.json())
.then((data) => {
console.log('Success:', data);
})
.catch((error) => {
console.error('Error:', error);
});
And in backend it's better to return json response for api calls instead of html. To do it you can change your code:
from flask import jsonify
...
if post_data['message'] == False:
print('false')
print('not detected')
return jsonify(error='not detected')
Ref: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#uploading_json_data
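Putting both halves together, a minimal backend route might look like this. It is a sketch, assuming the same index.html template as in the question:
from flask import Flask, jsonify, render_template, request

app = Flask(__name__)

@app.route('/', methods=['GET', 'POST'])
def index():
    if request.method == 'POST':
        post_data = request.get_json(force=True)
        if post_data.get('message') is False:
            # JSON response for the fetch() caller to consume in .then()
            return jsonify(error='not detected')
        return jsonify(error=None)
    # A normal GET just renders the page
    return render_template('index.html')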
A:
There's something else wrong. I copied your code exactly and ran
curl.exe --header "Content-Type: application/json" -d '{\"message\":false}' http://localhost:5000
and got
<p>not detected</p>
|
flask template not rendering as expected
|
Expected output is 'not detected' but I get 'no error' on get and post. Why?
index.html
{% if error %}
<p>{{ error }}</p>
{% else %}
<p>no error</p>
{% endif %}
main.py
@app.route('/', methods=['GET', 'POST'])
def index():
if request.method == 'GET':
print('get')
return render_template('index.html')
elif request.method == 'POST':
print('post')
post_data = request.get_json(force=True)
if post_data['message'] == False:
print('false')
print('not detected')
return render_template('index.html', error='not detected')
edit:
not sure if this is what's causing the errors.
script.js
window.onload = (event) => {
if (!window.ethereum) {
console.log('error - not detected');
fetch(`${window.origin}/`, {
method: 'POST',
headers: {'content-type': 'application/json'},
body: JSON.stringify({
'message': false
})
});
} else {
console.log('detected');
}
};
|
[
"The problem is with your js code. As I can see, you make fetch call providing parameters and not providing callback for response processing.\nIt should be:\nfetch(`${window.origin}/`, {\n method: 'POST',\n headers: {'content-type': 'application/json'},\n body: JSON.stringify({\n 'message': false\n })\n})\n.then((response) => response.json())\n.then((data) => {\n console.log('Success:', data);\n})\n.catch((error) => {\n console.error('Error:', error);\n});\n\nAnd in backend it's better to return json response for api calls instead of html. To do it you can change your code:\nfrom flask import jsonify\n\n...\n if post_data['message'] == False:\n print('false')\n print('not detected')\n return jsonify(error='not detected')\n\nRef: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#uploading_json_data\n",
"There's something else wrong. I copied your code exactly and ran\ncurl.exe --header \"Content-Type: application/json\" -d '{\\\"message\\\":false}' http://localhost:5000\n\nand got\n <p>not detected</p>\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"flask",
"post",
"python"
] |
stackoverflow_0074651348_flask_post_python.txt
|
Q:
SFMC AMPscript Understanding Controlling Expression Evaluation unclear
The below expression is not making sense to me. I'm having a hard time understanding why it will result in "free shipping". Can someone elaborate on the explanation?
%%[
var @statusTier, @amount, @freeShipping
set @statusTier = "Bronze"
set @amount = 300
if @statusTier == "Bronze" or @statusTier == "Silver" and @amount > 500 then
set @freeShipping = true
endif
]%%
<p>You %%=Iif(@freeShipping == true, "qualify","do not qualify")=%% for free shipping.</p>
Output:
The join operators in the above if statement will be evaluated as a single expression and will produce the following result:
<p>You qualify for free shipping.</p>
From my understanding, the set amount of 300 is not > 500, therefore this should not have been a true statement and should output "You do not qualify for free shipping.". Am I missing something here? Please help; I'm a newbie to AMPscript with little knowledge of JavaScript.
Original THE AMPSCRIPT GUIDE post: https://ampscript.guide/controlling-expression-evaluation/
Thank you for you input in advance!
A:
The example in the ampscript guide post is demonstrating order of operations.
When there are no parentheses around the OR condition, and binds more tightly than or, so the condition is read as @statusTier == "Bronze" or (@statusTier == "Silver" and @amount > 500), which evaluates as true because the first comparison passes.
When you add parenthesis:
if (@statusTier == "Bronze" or @statusTier == "Silver") and @amount > 500
The OR condition within () will be evaluated first and then the AND condition will be evaluated. This will result as false because @amount is in fact less than 500.
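The same precedence rule exists in most languages. Here is a Python analog you can run to convince yourself (the variable names mirror the AMPscript example):
status_tier = "Bronze"
amount = 300

# 'and' binds tighter than 'or', so this parses as:
# (status_tier == "Bronze") or (status_tier == "Silver" and amount > 500)
print(status_tier == "Bronze" or status_tier == "Silver" and amount > 500)    # True

# Parenthesizing the 'or' first changes the result:
print((status_tier == "Bronze" or status_tier == "Silver") and amount > 500)  # False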
|
SFMC AMPscript Understanding Controlling Expression Evaluation unclear
|
The below expression is not making sense to me. I'm having a hard time understanding why it will result in "free shipping". Can someone elaborate on the explanation?
%%[
var @statusTier, @amount, @freeShipping
set @statusTier = "Bronze"
set @amount = 300
if @statusTier == "Bronze" or @statusTier == "Silver" and @amount > 500 then
set @freeShipping = true
endif
]%%
<p>You %%=Iif(@freeShipping == true, "qualify","do not qualify")=%% for free shipping.</p>
Output:
The join operators in the above if statement will be evaluated as a single expression and will produce the following result:
<p>You qualify for free shipping.</p>
From my understanding, the set amount of 300 is not > 500, therefore this should not have been a true statement and should output "You do not qualify for free shipping.". Am I missing something here? Please help; I'm a newbie to AMPscript with little knowledge of JavaScript.
Original THE AMPSCRIPT GUIDE post: https://ampscript.guide/controlling-expression-evaluation/
Thank you for you input in advance!
|
[
"The example in the ampscript guide post is demonstrating order of operations.\nWhen there are no parentheses around the OR condition, the code will evaluate as true because it's looking at the entire condition as a single expression.\nWhen you add parenthesis:\nif (@statusTier == \"Bronze\" or @statusTier == \"Silver\") and @amount > 500\n\nThe OR condition within () will be evaluated first and then the AND condition will be evaluated. This will result as false because @amount is in fact less than 500.\n"
] |
[
0
] |
[] |
[] |
[
"ampscript",
"controlling",
"expression_evaluation",
"javascript",
"salesforce_marketing_cloud"
] |
stackoverflow_0074661845_ampscript_controlling_expression_evaluation_javascript_salesforce_marketing_cloud.txt
|
Q:
What is the easiest way to connect C# backend to an already created ASP.net react project?
I have some C# code that does CRUD operations and simple logic. And I have an ASP.NET React project with a basic page layout. What is the easiest way to connect the two? I just want to present the data that my C# code retrieves from the database and do simple insertions using my C# code and the React frontend.
I have attempted to look online for the best way to do this with little to no luck. If anyone could give me some advice or point me in the right direction, it would be much appreciated.
A:
The easiest way to connect your C# code and ASP.NET React project is to use a web API. A web API is a set of programming instructions and standards for accessing a web server, and can be used to create a bridge between your C# code and your React frontend.
Here is an outline of the steps you can follow to connect your C# code and ASP.NET React project using a web API:
Create a new ASP.NET Web API project in your solution. This will be the project that contains your web API, which will act as the bridge between your C# code and React frontend.
Add your C# code to the web API project. This can include your database access code, CRUD operations, and any other logic you want to use in your application.
Create web API controllers for each of the operations you want to expose to your React frontend. For example, you can create a CustomersController for CRUD operations on customer data, and a LogicController for any other logic you want to use in your application.
Add routes to your web API controllers. This will allow your React frontend to access the different operations provided by your web API.
In your React frontend, use the fetch or axios library to make HTTP requests to your web API. This will allow your React frontend to retrieve data from the database and perform CRUD operations using your C# code.
Use the data retrieved from the web API in your React components to display the information on your page.
This is a basic outline of how you can connect your C# code and ASP.NET React project using a web API. There are many other ways you can approach this problem, and the best solution will depend on your specific needs and requirements. You may need to do some additional research and experimentation to find the best solution for your project.
|
What is the easiest way to connect C# backend to an already created ASP.net react project?
|
I have some C# code that does CRUD operations and simple logic. And I have an ASP.NET React project with a basic page layout. What is the easiest way to connect the two? I just want to present the data that my C# code retrieves from the database and do simple insertions using my C# code and the React frontend.
I have attempted to look online for the best way to do this with little to no luck. If anyone could give me some advice or point me in the right direction, it would be much appreciated.
|
[
"The easiest way to connect your C# code and ASP.NET React project is to use a web API. A web API is a set of programming instructions and standards for accessing a web server, and can be used to create a bridge between your C# code and your React frontend.\nHere is an outline of the steps you can follow to connect your C# code and ASP.NET React project using a web API:\nCreate a new ASP.NET Web API project in your solution. This will be the project that contains your web API, which will act as the bridge between your C# code and React frontend.\nAdd your C# code to the web API project. This can include your database access code, CRUD operations, and any other logic you want to use in your application.\nCreate web API controllers for each of the operations you want to expose to your React frontend. For example, you can create a CustomersController for CRUD operations on customer data, and a LogicController for any other logic you want to use in your application.\nAdd routes to your web API controllers. This will allow your React frontend to access the different operations provided by your web API.\nIn your React frontend, use the fetch or axios library to make HTTP requests to your web API. This will allow your React frontend to retrieve data from the database and perform CRUD operations using your C# code.\nUse the data retrieved from the web API in your React components to display the information on your page.\nThis is a basic outline of how you can connect your C# code and ASP.NET React project using a web API. There are many other ways you can approach this problem, and the best solution will depend on your specific needs and requirements. You may need to do some additional research and experimentation to find the best solution for your project.\n"
] |
[
0
] |
[] |
[] |
[
"asp.net",
"c#"
] |
stackoverflow_0074662074_asp.net_c#.txt
|
Q:
when i import react i got an error about react is declared but its value is never read
import React
'React' is declared but its value is never read.ts(6133)
'React' is defined but never used.
import React from "react";
import reactDom from "react-dom";
A:
Go to the webpack.config.js file located in ....\node_modules\react-scripts\config\webpack.config.js and add to it:
plugins: [
new webpack.ProvidePlugin({
React: 'react'
})
]
This will automatically load React instead of having to import it everywhere. You can check it here.
|
when i import react i got an error about react is declared but its value is never read
|
import React
'React' is declared but its value is never read.ts(6133)
'React' is defined but never used.
import React from "react";
import reactDom from "react-dom";
|
[
"go to to webpack.config.js file located in ....\\node_modules\\react-scripts\\config\\webpack.config.js . and add to it :\nplugins: [\n new webpack.ProvidePlugin({\n React: 'react'\n })\n ]\n\nthis will Automatically load react instead of having to import it everywhere . you can check it here\n"
] |
[
0
] |
[] |
[] |
[
"reactjs"
] |
stackoverflow_0074661797_reactjs.txt
|
Q:
Replace value at i in Dictionary
I am trying to loop through a dictionary and, if a value meets a requirement (distinction >= 70, merit >= 60, pass >= 50, and fail below 50), replace that value with the correct classification.
For example, the first value being passed through is mark_1, which is 20, so in the dictionary I am looking to replace the 20 with "fail".
module_1="Maths"
module_2="English"
module_3="Science"
module_4="Business"
module_5="PE"
mark_1 =20
mark_2=30
mark_3 =40
mark_4=50
mark_5=60
module_marks = {module_1:int(mark_1),
module_2: int(mark_2),
module_3: int(mark_3),
module_4: int(mark_4),
module_5:int(mark_5)}
marks= classifygrade.classify_grade(module_marks)
And in my other class it defines the method to try and accomplish this.
def classify_grade(module_marks):
for i in module_marks.values():
if i>=70:
module_marks[i].update("distinction")
elif i>=60:
module_marks[i].update("merit")
elif i>=50:
module_marks[i].update("pass")
else:
module_marks[i].update("fail")
A:
The problem is that you're trying to access a field in a dictionary by its value. You are also trying to return a new dictionary, but you are changing the original dictionary. You should create a separate dictionary and use dict.items() instead. Like this:
def classifyMarks(marks):
result = {}
for (subject, grade) in marks.items():
if grade >= 70:
result[subject] = "Distinction"
elif grade >= 60:
result[subject] = "Merit"
elif grade >= 50:
result[subject] = "Pass"
else:
result[subject] = "Fail"
return result
marks = {
"Maths": 20,
"English": 30,
"Science": 40,
"Business": 50,
"PE": 60
}
marks = classifyMarks(marks)
print(marks)
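If the grade bands ever change, a data-driven variant avoids editing the if-chain. A sketch; the bands list is my own framing of the same cutoffs:
def classify(grade):
    # Bands checked from highest to lowest; the first match wins.
    bands = [(70, "Distinction"), (60, "Merit"), (50, "Pass")]
    for cutoff, label in bands:
        if grade >= cutoff:
            return label
    return "Fail"

marks = {"Maths": 20, "English": 30, "Science": 40, "Business": 50, "PE": 60}
print({subject: classify(grade) for subject, grade in marks.items()})
# {'Maths': 'Fail', 'English': 'Fail', 'Science': 'Fail', 'Business': 'Pass', 'PE': 'Merit'}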
|
Replace value at i in Dictionary
|
I am trying to loop through a dictionary and, if a value meets a requirement (distinction >= 70, merit >= 60, pass >= 50, and fail below 50), replace that value with the correct classification.
For example, the first value being passed through is mark_1, which is 20, so in the dictionary I am looking to replace the 20 with "fail".
module_1="Maths"
module_2="English"
module_3="Science"
module_4="Business"
module_5="PE"
mark_1 =20
mark_2=30
mark_3 =40
mark_4=50
mark_5=60
module_marks = {module_1:int(mark_1),
module_2: int(mark_2),
module_3: int(mark_3),
module_4: int(mark_4),
module_5:int(mark_5)}
marks= classifygrade.classify_grade(module_marks)
And in my other class it defines the method to try and accomplish this.
def classify_grade(module_marks):
for i in module_marks.values():
if i>=70:
module_marks[i].update("distinction")
elif i>=60:
module_marks[i].update("merit")
elif i>=50:
module_marks[i].update("pass")
else:
module_marks[i].update("fail")
|
[
"The problem is that you're trying to access a field in a dictionary by its value. You are also trying to return a new dictionary, but you are changing the original dictionary. You should create a separate dictionary and use dict.items() instead. Like this:\ndef classifyMarks(marks):\n result = {}\n for (subject, grade) in marks.items():\n if grade >= 70:\n result[subject] = \"Distinction\"\n elif grade >= 60:\n result[subject] = \"Merit\"\n elif grade >= 50:\n result[subject] = \"Pass\"\n else:\n result[subject] = \"Fail\"\n return result\n\n\nmarks = {\n \"Maths\": 20,\n \"English\": 30,\n \"Science\": 40,\n \"Business\": 50,\n \"PE\": 60\n}\n\nmarks = classifyMarks(marks)\nprint(marks)\n\n"
] |
[
1
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0074662028_python_python_3.x.txt
|