Q:
How to convert missing value in csv file to database null value using ADF copy activity?
I have a pipeline in Azure Data Factory that takes in incoming CSV files and saves them to a SQL Server database. I use a copy activity to take the wrangled CSV file and call a stored procedure that saves it to the database table.
However, it is not unusual for some records in the CSV file to have missing values in some columns. Such missing values will fail the copy activity; below is the error message:
ErrorCode=InvalidParameter,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=The value of the property '' is invalid: 'Cannot set Column 'col 1' to be null. Please use DBNull instead.'
The copy activity runs correctly when there is no missing value in the incoming data.
Below is the snippet of the stored procedure that fails when it encounters missing value(s).
INSERT INTO target_table(
[Id],
[col 1],
[col 2],
[col 3]
)
SELECT
[source Id],
[column 1],
[column 2],
[column 3]
FROM source_table
My question is: what can I do to convert the missing values in the CSV file into null values that SQL Server understands?
I originally thought the problem was on the database side, so I created a test table in SQL Server, intentionally put some test data with missing values into it, and then ran the stored procedure. These records with missing values get saved to the target table correctly. So I realized that the problem lies in how the copy activity takes in the CSV file and passes it to the stored procedure: the missing values don't get translated into a null value that SQL Server can understand.
A:
You can use a Data Flow activity to set the value to NULL. Below is the approach.
In the Data Flow, the source data is taken as shown in the image below.
A Derived Column transformation is added and the expression is given as iifNull(id, toString(null())).
Result:
A:
Have you tried this option in the Copy activity?
This should do the trick.
A:
After various attempts, here is my solution to the problem. It is not ideal, but it works. I created a permanent staging table in SQL Server and use the copy activity to transfer the CSV data into this staging table. The trick is to use the insert option in the copy activity (see the picture) instead of calling a stored procedure, which is what I tried previously. It feels like there is some internal mechanism between the copy activity and SQL Server that handles the missing values. Once the data is saved in the staging table in SQL Server, I can easily do whatever I want in the database world, and missing values are no longer an issue.
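For illustration, here is a minimal T-SQL sketch of that staging approach; the staging table name is a placeholder, and it assumes the blanks arrive as empty strings once the copy activity has landed the CSV rows:
INSERT INTO target_table([Id], [col 1], [col 2], [col 3])
SELECT
    [source Id],
    NULLIF([column 1], '') AS [col 1],  -- empty CSV value becomes a real NULL
    NULLIF([column 2], '') AS [col 2],
    NULLIF([column 3], '') AS [col 3]
FROM staging_table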
Q:
Why doesn't Element.attrib include namespace definitions?
I'd like to create an XML namespace mapping (e.g., to use in findall calls as in the Python documentation of ElementTree). Given that the definitions seem to exist as attributes of the xbrl root element, I'd have thought I could just examine the attrib attribute of the root element within my ElementTree. However, the following code
from io import StringIO
import xml.etree.ElementTree as ET
TEST = '''<?xml version="1.0" encoding="utf-8"?>
<xbrl
xml:lang="en-US"
xmlns="http://www.xbrl.org/2003/instance"
xmlns:country="http://xbrl.sec.gov/country/2021"
xmlns:dei="http://xbrl.sec.gov/dei/2021q4"
xmlns:iso4217="http://www.xbrl.org/2003/iso4217"
xmlns:link="http://www.xbrl.org/2003/linkbase"
xmlns:nvda="http://www.nvidia.com/20220130"
xmlns:srt="http://fasb.org/srt/2021-01-31"
xmlns:stpr="http://xbrl.sec.gov/stpr/2021"
xmlns:us-gaap="http://fasb.org/us-gaap/2021-01-31"
xmlns:xbrldi="http://xbrl.org/2006/xbrldi"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
</xbrl>'''
xbrl = ET.parse(StringIO(TEST))
print(xbrl.getroot().attrib)
produces the following output:
{'{http://www.w3.org/XML/1998/namespace}lang': 'en-US'}
Why aren't any of the namespace attributes showing up in root.attrib? I'd at least expect xmlns to be in the dictionary given it has no prefix.
What have I tried?
The following code seems to work to generate the namespace mapping:
print({prefix: uri for key, (prefix, uri) in ET.iterparse(StringIO(TEST), events=['start-ns'])})
output:
{'': 'http://www.xbrl.org/2003/instance',
'country': 'http://xbrl.sec.gov/country/2021',
'dei': 'http://xbrl.sec.gov/dei/2021q4',
'iso4217': 'http://www.xbrl.org/2003/iso4217',
'link': 'http://www.xbrl.org/2003/linkbase',
'nvda': 'http://www.nvidia.com/20220130',
'srt': 'http://fasb.org/srt/2021-01-31',
'stpr': 'http://xbrl.sec.gov/stpr/2021',
'us-gaap': 'http://fasb.org/us-gaap/2021-01-31',
'xbrldi': 'http://xbrl.org/2006/xbrldi',
'xlink': 'http://www.w3.org/1999/xlink',
'xsi': 'http://www.w3.org/2001/XMLSchema-instance'}
But yikes is it gross to have to parse the file twice.
A:
As for the answer to your specific question, why the attrib list doesn't contain the namespace prefix declarations, sorry for the unsatisfying answer: because they're not attributes.
http://www.w3.org/XML/1998/namespace is a special schema that doesn't act like the other schemas in your userspace. In that representation, xmlns:prefix="uri" is an attribute. In all other subordinate (by parsing sequence) schemas, xmlns:prefix="uri" is a special thing, a namespace prefix declaration, which is different than an attribute on a node or element. I don't have a reference for this but it holds true perfectly in at least a half dozen (correct) implementations of XML parsers that I've used, including those from IBM, Microsoft and Oracle.
As for the ugliness of reparsing the file, I feel your pain but it's necessary. As tdelaney so well pointed out, you may not assume that all of your namespace decls or prefixes must be on your root element.
Be prepared for the possibility of the same prefix being redefined with a different namespace on every node in your document. This may hold true, and the library must work correctly with it, even if it is never the case in your document (or worse, if it just hasn't been the case so far).
Consider if perhaps you are shoehorning some text processing to parse or query XML when there may be a better solution, like XPath or XQuery. There are some good recent changes to and Python wrappers for Saxon, even though their pricing model has changed.
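For what it's worth, a single iterparse pass can collect the prefix map and still hand back the parsed tree, so the file is only read once; this is just a sketch against the TEST document from the question:
from io import StringIO
import xml.etree.ElementTree as ET

ns = {}      # prefix -> URI, usable as the namespaces argument of findall()
root = None
for event, payload in ET.iterparse(StringIO(TEST), events=['start-ns', 'start']):
    if event == 'start-ns':
        prefix, uri = payload
        ns[prefix] = uri         # '' is the default (unprefixed) namespace
    elif root is None:
        root = payload           # the first 'start' event is the root element

print(ns['xlink'])   # http://www.w3.org/1999/xlink
print(root.tag)      # {http://www.xbrl.org/2003/instance}xbrl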
Q:
Use Reveal.js as npm in Next.js
I'm working on a project where I have to use Reveal.js in Next.js.
But I don't understand how to display the slides. When I try to display them, nothing appears.
I tried to display some slides but nothing appeared; here's my code:
My component
//Slide.js
import Reveal from 'reveal.js';
import Markdown from 'reveal.js/plugin/markdown/markdown.esm.js';
import '/node_modules/reveal.js/dist/reveal.css';
import '/node_modules/reveal.js/dist/theme/black.css';
Reveal.initialize({
controls:true,
width:1000,
height:1000,
margin: 0.1,
display:true,
plugins: [ Markdown ]
});
export default function Slide(){
return(
<>
<div class="reveal">
<div class="slides">
<section>Slide 1</section>
<section data-state="make-it-pop">
<section>Vertical Slide 1</section>
<section>Vertical Slide 2</section>
</section>
<section data-markdown>
<textarea data-template>
## Slide 1
A paragraph with some text and a [link](http://hakim.se).
---
## Slide 2
---
## Slide 3
</textarea>
</section>
</div>
</div>
</>
)
}
And here the page where it should be displayed:
//index.js
import dynamic from 'next/dynamic';
const Slide = dynamic(() => import('./slide'), { ssr: false, })
export default function Home() {
return (
<div>
<Slide></Slide>
</div>
)
}
A:
You are importing the reveal.js package, but you are not using it anywhere in your Slide component. You need to use the Reveal.initialize() method inside the Slide component to initialize the presentation. This is done inside the
useEffect hook.
import { useEffect } from "react"; // the hook used below needs this import

export default function Slide(){
useEffect(() => {
Reveal.initialize({
controls:true,
width:1000,
height:1000,
margin: 0.1,
display:true,
plugins: [ Markdown ]
});
}, []);
return(
…
Q:
osCommerce 2.3 - Unable to determine the page link
I have a problem in osCommerce 2.3
PHP/5.6.31
I have tried to set it up on my localhost but it returns an error message.
How can I fix this problem?
Is there any problem in the configuration?
Error!
Unable to determine the page link!
Function used:
tep_href_link('', '', 'NONSSL')
A:
If you look in catalog/includes/functions/html_output.php, you'll see the function definition for tep_href_link. It's
function tep_href_link($page = '', $parameters = '', $connection = 'NONSSL', $add_session_id = true, $search_engine_safe = true) {
You have provided a blank first parameter. If you look at some examples in the catalog directory, you'll see they provide a first parameter, generally drawn from a define in includes/filenames.php.
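For example, a typical call passes one of the filename defines from includes/filenames.php as the first argument; the query string below is purely illustrative:
// FILENAME_DEFAULT is the stock define for 'index.php' in osCommerce 2.3
echo tep_href_link(FILENAME_DEFAULT, 'cPath=1', 'NONSSL');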
A:
I have faced the same issue, and on debugging found that it comes from tep_href_link(basename($PHP_SELF)): $PHP_SELF was not working properly and returned blank. It got fixed just by placing
$PHP_SELF=$_SERVER['PHP_SELF'];
define('CONFIGURE_STATUS_COMPLETED', 1);
at the bottom of includes/configure.php, and it worked.
Q:
swiperjs sets wrong height for the first picture
Since I added autoHeight to my Swiper, the div height for the first picture is wrong;
when I swipe away and swipe back, the picture gets the correct height. What can I add, CSS- or JS-wise, to prevent this?
The prefixed height seems to be the height of the direction arrows.
my js:
var W = {
init: !0,
direction: "horizontal",
touchEventsTarget: "wrapper",
observer: 1,
autoHeight: 1,
observeParents: 1,
initialSlide: 0,
speed: 300,
cssMode: !1,
updateOnWindowResize: !0,
resizeObserver: !0,
nested: !1,
createElements: !1,
enabled: !0,
focusableElements: "input, select, option, textarea, button, video, label",
width: null,
height: null,
preventInteractionOnTransition: !1,
userAgent: null,
url: null,
edgeSwipeDetection: !1,
edgeSwipeThreshold: 20,
setWrapperSize: !1,
virtualTranslate: !1,
effect: "slide",
breakpoints: void 0,
breakpointsBase: "window",
spaceBetween: 0,
slidesPerView: 1,
slidesPerGroup: 1,
slidesPerGroupSkip: 0,
slidesPerGroupAuto: !1,
centeredSlides: !1,
centeredSlidesBounds: !1,
slidesOffsetBefore: 0,
slidesOffsetAfter: 0,
normalizeSlideIndex: !0,
centerInsufficientSlides: !1,
watchOverflow: !0,
roundLengths: !1,
touchRatio: 1,
touchAngle: 45,
simulateTouch: !0,
shortSwipes: !0,
longSwipes: !0,
longSwipesRatio: 0.5,
longSwipesMs: 300,
followFinger: !0,
allowTouchMove: !0,
threshold: 0,
touchMoveStopPropagation: !1,
touchStartPreventDefault: !0,
touchStartForcePreventDefault: !1,
touchReleaseOnEdges: !1,
uniqueNavElements: !0,
resistance: !0,
resistanceRatio: 0.85,
watchSlidesProgress: !1,
grabCursor: !1,
preventClicks: !0,
preventClicksPropagation: !0,
slideToClickedSlide: !1,
preloadImages: 1,
updateOnImagesReady: 1,
loop: !1,
loopAdditionalSlides: 0,
loopedSlides: null,
loopFillGroupWithBlank: !1,
loopPreventsSlide: !0,
rewind: !1,
allowSlidePrev: !0,
allowSlideNext: !0,
swipeHandler: null,
noSwiping: !0,
noSwipingClass: "swiper-no-swiping",
noSwipingSelector: null,
passiveListeners: !0,
maxBackfaceHiddenSlides: 10,
containerModifierClass: "swiper-",
slideClass: "swiper-slide",
slideBlankClass: "swiper-slide-invisible-blank",
slideActiveClass: "swiper-slide-active",
slideDuplicateActiveClass: "swiper-slide-duplicate-active",
slideVisibleClass: "swiper-slide-visible",
slideDuplicateClass: "swiper-slide-duplicate",
slideNextClass: "swiper-slide-next",
slideDuplicateNextClass: "swiper-slide-duplicate-next",
slidePrevClass: "swiper-slide-prev",
slideDuplicatePrevClass: "swiper-slide-duplicate-prev",
wrapperClass: "swiper-wrapper",
runCallbacksOnInit: !0,
_emitClasses: !1,
};
I already tried adding the following:
preloadImages: 1,
updateOnImagesReady: 1,
observer: 1, and
observeParents: 1,
as well as removing lazy loading.
A:
To prevent the div height from being incorrect on the first slide, you can try setting the autoHeight option to false in your swiper configuration. This will prevent the swiper from trying to automatically adjust the height of the slides, which may be causing the issue you are seeing.
var W = {
//...
autoHeight: false,
//...
};
Alternatively, you could try setting the height of the slides manually using CSS. You can do this by applying a specific height to the .swiper-slide class in your CSS file. For example:
.swiper-slide {
height: 400px;
}
This will set the height of all slides to 400 pixels. You can adjust this value as needed to fit the size of your images.
Hope that helps!
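Separately from the above, if you want to keep autoHeight, one common workaround (a sketch, not from the original post; it assumes the Swiper instance is stored in a variable named swiper and that the slides contain img elements) is to re-measure the height once the images have finished loading:
// Assumes something like: const swiper = new Swiper('.swiper', W);
document.querySelectorAll('.swiper-slide img').forEach(function (img) {
  if (img.complete) return;                 // already loaded, height is already correct
  img.addEventListener('load', function () {
    swiper.updateAutoHeight(0);             // recalculate the active slide's height
  });
});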
Q:
Get records for the nearest date if record does not exist for a particular date
I have a pandas dataframe of stock records, my goal is to pass in a particular 'day' e.g 8 and get the filtered data frame for the 8th of each month and year in the dataset.
I have gone through some SO questions and managed to get one part of my requirement, which was getting the records for a particular day. However, if the data for, say, the '8th' does not exist for a particular month and year, I need to get the records for the closest day where a record exists for that month and year.
As an example, if I pass in 8th and there is no record for 8th Jan' 2022, I need to see if records exists for 7th and 9th Jan'22, and so on..and get the record for the nearest date.
If record is present in both 7th and 9th, I will get the record for 9th (higher date).
However, it is possible if the record for 7th exists and 9th does not exist, then I will get the record for 7th (closest).
Code I have written so far
filtered_df = data.loc[(data['Date'].dt.day == 8)]
If the dataset is required, please let me know. I tried to make it clear but if there is any doubt, please let me know. Any help in the correct direction is appreciated.
A:
Alternative 1
Resample to a daily resolution, selecting the nearest day to fill in missing values:
df2 = df.resample('D').nearest()
df2 = df2.loc[df2.index.day == 8]
Alternative 2
A more general method (and a tiny bit faster) is to generate dates/times of your choice, then use reindex() and method 'nearest'. It is more general because you can use any series of timestamps you could come up with (not necessarily aligned with any frequency).
dates = pd.date_range(
start=df.first_valid_index().normalize(), end=df.last_valid_index(),
freq='D')
dates = dates[dates.day == 8]
df2 = df.reindex(dates, method='nearest')
Example
Let's start with a reproducible example:
import yfinance as yf
df = yf.download(['AAPL', 'AMZN'], start='2022-01-01', end='2022-12-31', freq='D')
>>> df.iloc[:10, :5]
Adj Close Close High
AAPL AMZN AAPL AMZN AAPL
Date
2022-01-03 180.959747 170.404495 182.009995 170.404495 182.880005
2022-01-04 178.663086 167.522003 179.699997 167.522003 182.940002
2022-01-05 173.910645 164.356995 174.919998 164.356995 180.169998
2022-01-06 171.007523 163.253998 172.000000 163.253998 175.300003
2022-01-07 171.176529 162.554001 172.169998 162.554001 174.139999
2022-01-10 171.196426 161.485992 172.190002 161.485992 172.500000
2022-01-11 174.069748 165.362000 175.080002 165.362000 175.179993
2022-01-12 174.517136 165.207001 175.529999 165.207001 177.179993
2022-01-13 171.196426 161.214005 172.190002 161.214005 176.619995
2022-01-14 172.071335 162.138000 173.070007 162.138000 173.779999
Now:
df2 = df.resample('D').nearest()
df2 = df2.loc[df2.index.day == 8]
>>> df2.iloc[:5, :5]
Adj Close Close High
AAPL AMZN AAPL AMZN AAPL
2022-01-08 171.176529 162.554001 172.169998 162.554001 174.139999
2022-02-08 174.042633 161.413498 174.830002 161.413498 175.350006
2022-03-08 156.730942 136.014496 157.440002 136.014496 162.880005
2022-04-08 169.323975 154.460495 170.089996 154.460495 171.779999
2022-05-08 151.597595 108.789001 152.059998 108.789001 155.830002
Warning
Replacing a missing day with data from the future (which is what happens when the nearest day is after the missing one) is called peeking ahead and can cause look-ahead bias in quant research that would use that data. It is usually considered dangerous. You'd be safer using method='ffill'.
Q:
Is it possible to send specific parameters to an object that was already set up?
I am working on a project. I need to change the size of a JPanel, but only the width. As far as I know, there isn't a function to change just the width of a JPanel, so I am wondering whether it is possible to put some symbol or something in place of the other parameters that would keep them the same and change only the width. Does something like that exist?
This is what I'm referring to
A:
Yes, it is possible to change the width of a JPanel without affecting its other dimensions. You can use the setPreferredSize() method of the JPanel to set its preferred size, and specify only the width in the Dimension object that you pass as the argument to the setPreferredSize() method. Here is an example of how you can do this:
// Create a Dimension object with the desired width and set the height to the current height of the JPanel
Dimension preferredSize = new Dimension(500, panel.getHeight());
// Set the JPanel's preferred size using the Dimension object
panel.setPreferredSize(preferredSize);
// Use the validate() method to update the JPanel's size
panel.validate();
Alternatively, you can use the setBounds() method to set the size and position of the JPanel, and specify only the width and the x and y coordinates in the Rectangle object that you pass as the argument to the setBounds() method. Here is an example of how you can do this:
// Create a Rectangle object with the desired width, x and y coordinates, and set the height to the current height of the JPanel
Rectangle bounds = new Rectangle(100, 100, 500, panel.getHeight());
// Set the JPanel's bounds using the Rectangle object
panel.setBounds(bounds);
// Use the validate() method to update the JPanel's size and position
panel.validate();
I hope this helps. Let me know if you have any further questions.
Q:
Error in my next button in second view controller
For some reason, the buttons on the second view controller are not working. When I test the code on my device, it's giving me this error:
Thread 1: "-[DepressionApp1.SecondViewController NextButton]: unrecognized selector sent to instance 0x103605560"
Can someone find the error?
Here is the code for my second view controller
import UIKit
class SecondViewController: UIViewController, UINavigationControllerDelegate{
override func viewDidLoad(){
super.viewDidLoad()
}
@IBAction func nextButton(){
let vc = storyboard?.instantiateViewController(withIdentifier: "third") as! ThirdViewController
vc.modalPresentationStyle = .overFullScreen
present(vc,animated: true)
}
@IBAction func prevbutton(){
let vc = storyboard?.instantiateViewController(withIdentifier: "") as! ViewController
vc.modalPresentationStyle = .overFullScreen
present(vc,animated: true)
}
}
Here is the code for my first view controller
import UIKit
class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
@IBOutlet weak var imageview: UIImageView!
override func viewDidLoad() {
super.viewDidLoad()
}
@IBAction func Btnimagepicker(_ sender: Any) {
let picker = UIImagePickerController()
picker.allowsEditing=true
picker.delegate=self
present(picker, animated:true)
}
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
guard let image=info[.editedImage] as? UIImage else {return}
imageview.image=image
dismiss(animated:true)
}
@IBAction func didTapButton(){
let vc = storyboard?.instantiateViewController(withIdentifier: "second") as! SecondViewController
vc.modalPresentationStyle = .fullScreen
present(vc,animated: true)
}
}
And here is the code for my third view controller
import UIKit
class ThirdViewController: UIViewController {
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view.
}
@IBOutlet weak var Text: UITextView!
/*
// MARK: - Navigation
@IBOutlet weak var Text: UITextView!
// In a storyboard-based application, you will often want to do a little preparation before navigation
override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
// Get the new view controller using segue.destination.
// Pass the selected object to the new view controller.
}
*/
}
Basically, the first view controller is supposed to let the user pick an image and go to the next view controller. Then, in the second view controller, the user should be able to go back to the first view controller or go to the third view controller.
A:
Make sure your @IBOutlet is correctly connected to the nib. Most of the times I encountered such an error, it was due to a missing connection or an old connection that was not removed. If you upload a link to a project with this issue, it will be easy to find and fix.
Q:
How to adjust the default _ttl value on a DynamoDB table (when deleting with AppSync, using AWS Amplify)
When I delete an item using AppSync (or DataStore) on an AWS Amplify app, it makes two changes to the DynamoDB item:
The delete field is set to true
A _ttl field is added, and a timestamp value is provided of 1 month in the future.
According to the AppSync conflict detection documentation, the value is configured on the DeltaSyncTableTTL value, which is configured on the data source:
_ttl
A numeric value that stores the timestamp, in epoch seconds,
when an item should be removed from the Delta table. This value is
determined by adding the DeltaSyncTableTTL value configured on the
data source to the moment when the change occurred. This field should
be configured as the DynamoDB TTL Attribute.
If I go to my AppSync console, and navigate to 'Data Sources' in the left panel, I'm provided with links to my DynamoDB data sources. But I can't find any settings anywhere in the AppSync or DynamoDB consoles to update a DeltaSyncTableTTL value.
A:
To update the "Base table time to live", go to your AppSync console and click 'Data Sources' on the panel on the left side of the page.
Then, select the datasource/dynamodb table by clicking the round checkbox to the left of the data source and click the 'Edit' button above.
Scroll down to the following and adjust as required:
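If you would rather script the change than click through the console, the same setting can be updated via the AppSync UpdateDataSource API; the AWS CLI sketch below uses placeholder IDs, table names, and TTL values (none of them come from the original post; 43200 minutes is roughly one month):
aws appsync update-data-source \
  --api-id <your-api-id> \
  --name <your-data-source-name> \
  --type AMAZON_DYNAMODB \
  --dynamodb-config '{
    "tableName": "<your-base-table>",
    "awsRegion": "<your-region>",
    "versioned": true,
    "deltaSyncConfig": {
      "baseTableTTL": 43200,
      "deltaSyncTableName": "<your-delta-sync-table>",
      "deltaSyncTableTTL": 43200
    }
  }'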
Q:
How to script pairwise comparisons in a for loop in zsh using an if statement?
I am running zsh (z shell) on a Mac.
I would like to run pairwise comparisons between all subjects in the list subjects without repeating overlapping comparisons, such as between subject1-subject2 and subject2-subject1. In this example, only the first comparison should be applied by the code.
subjects=(Subject1 Subject2 Subject3 Subject4)
for i in $subjects
do
for j in $subjects
do
if [ $i < $j ]
then
echo "Processing pair $i - $j ..."
fi
done
done
The output I get is:
zsh: no such file or directory: Subject1
zsh: no such file or directory: Subject2
zsh: no such file or directory: Subject3
zsh: no such file or directory: Subject4
zsh: no such file or directory: Subject1
...
What would be the correct operator in if [ $i < $j ] to exclude repeated comparisons? I also tried using if [ "$i" '<' "$j" ] but then I get zsh: condition expected: <
A:
You need to escape the < argument to [ to prevent it from being interpreted as an input redirection:
if [ $i \< $j ]
Or switch to [[, which bypasses regular command parsing.
if [[ $i < $j ]]
The command [ $i < $j ] is equivalent to [ $i ] < $j, with [ being simply another name for the test command, not some kind of shell syntax.
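Putting it together, a minimal corrected version of the loop from the question:
subjects=(Subject1 Subject2 Subject3 Subject4)
for i in $subjects
do
  for j in $subjects
  do
    if [[ $i < $j ]]    # string comparison; keeps each unordered pair exactly once
    then
      echo "Processing pair $i - $j ..."
    fi
  done
done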
Q:
I don't understand why I have " cannot evaluate a function that has an argument without a value ('fetchurl') " when I try to build a package
I try to build a nix package
a.nix :
{ lib
, stdenv
, fetchurl
, testVersion
, hello
}:
stdenv.mkDerivation {
name = "libfoo-1.2.3";
src = fetchurl {
url = "http://example.org/libfoo-1.2.3.tar.bz2";
sha256 = "0x2g1jqygyr5wiwg4ma1nd7w4ydpy82z9gkcv8vh2v8dn3y58v5m";
};
}
then I execute this bash command
nix-build a.nix
cannot evaluate a function that has an argument without a value ('fetchurl')
someone suggested that I write this command instead:
nix-build '<nixpkgs>' a.nix
It raised this error
Please be informed that this pseudo-package is not the only part of
Nixpkgs that fails to evaluate. You should not evaluate entire Nixpkgs
without some special measures to handle failing packages, like those taken
by Hydra.
I'm not sure that is the solution to the problem; lib was imported without problems.
Does somebody know where this problem comes from?
A:
If you want it to be invokable with nix-build and also with pkgs.callPackage, consider specifying default arguments:
{ pkgs ? import <nixpkgs> { system = builtins.currentSystem; }
, lib ? pkgs.lib
, stdenv ? pkgs.stdenv
, fetchurl ? pkgs.fetchurl
, testVersion ? pkgs.testVersion
, hello ? pkgs.hello
}:
stdenv.mkDerivation {
name = "libfoo-1.2.3";
src = fetchurl {
url = "http://example.org/libfoo-1.2.3.tar.bz2";
sha256 = "0x2g1jqygyr5wiwg4ma1nd7w4ydpy82z9gkcv8vh2v8dn3y58v5m";
};
}
...but I don't generally advise that. Pick one calling convention or the other, don't do both. (In modern times, a flake is a great way to have a consistent structure for your code with inputs reliably pinned: if you had a flake.nix using pkgs.callPackage to invoke your derivation, that would let users invoke nix build pathToYourFlake#nameOfThePackage to build it -- nix build .# if it's the flake's default package and the current directory is a checkout of your flake).
Q:
Build an "IF/ELSE IF/ELSE" loop in Pine Script and avoid "Mismatched input 'if' expecting 'end of line without line continuation'
As a total beginner in Pine Script, I am trying to iteratively build an array based on the values of two variables. I come from a Python background and, looking at the Pine Script documentation, I would expect it to be similar to what I have tried below.
However, I receive a "Mismatched input 'if' expecting 'end of line without line continuation'" error for the first "else if" statement. Does anyone have a second to find what is probably a trivial issue to a more experienced Pine Script coder? Thank you
Here is what I have at the moment:
//@version=5
indicator('Automated Label')
past_return = input(10, "Old return - Consider return x old candles")
more_recent_return = input(5, "Recent return - Consider return x old candles")
Rtd1 = (close - close[past_return]) / close[past_return]
Rtd2 = (close - close[more_recent_return]) / close[more_recent_return]
pre_label = array.new_float(0)
if Rtd1 > 0 and Rtd2 > 0
if Rtd1 > Rtd2
array.push(pre_label, value=4)
else
array.push(pre_label, value=3)
else if Rtd2 > 0
array.push(pre_label, value=1)
else if Rtd1 < 0 and Rtd2 < 0)
if Rtd1 < Rtd2
array.push(pre_label, -4)
else
array.push(pre_label, -3)
else if Rtd2 < 0
array.push(pre_label, -1)
else
array.push(pre_label,0)
A:
The error is misleading, possibly because the parser cannot handle the whole if-block properly. The actual cause is an unpaired ) in the second else if block:
else if Rtd1 < 0 and Rtd2 < 0)
Delete the last bracket and it should work.
Q:
how to specialize template class to get pointer from T, where T can be U* or shared_ptr?
Consider the following template:
template<typename T, uint32_t HandleTag = '_ptr'>
struct X
{
void * toPtr(T t)
{
return 0;
}
std::string toHandle(T t)
{
const void *rawptr = toPtr(t);
std::stringstream ss;
for(int i = 24; i >= 0; i -= 8) ss << char((HandleTag >> i) & 0xFF);
ss << ':' << rawptr;
return ss.str();
}
};
where T can be a raw pointer U* or a smart pointer, e.g. shared_ptr<U>.
How to specialize void * X<T,HandleTag>::toPtr(T t) for the two cases?
Not even sure it counts as template specialization, as U is generic, so I'd have to introduce a template arg....
Tried:
template<typename U>
void * X<U*>::toPtr(U* t)
{
return t;
}
template<typename U>
void * X<shared_ptr<U>>::toPtr(shared_ptr<U> t)
{
return t.get();
}
but compiler said:
testptr.cpp:27:15: error: nested name specifier 'X<U *>::' for declaration does not refer into a class, class template or class template partial specialization
void * X<U*>::toPtr(U* t)
~~~~~~~^
testptr.cpp:29:12: error: use of undeclared identifier 't'
return t;
^
testptr.cpp:33:26: error: nested name specifier 'X<shared_ptr<U>>::' for declaration does not refer into a class, class template or class template partial specialization
void * X<shared_ptr<U>>::toPtr(shared_ptr<U> t)
~~~~~~~~~~~~~~~~~~^
testptr.cpp:35:12: error: use of undeclared identifier 't'
return t.get();
^
A:
I think I solved it by overloading the X::toPtr method with templates:
template<typename T, uint32_t HandleTag = '_ptr'>
struct X
{
template<typename U>
void * toPtr(U* t)
{
return t;
}
template<typename U>
void * toPtr(shared_ptr<U> t)
{
return t.get();
}
...
Comments on this solution appreciated.
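One possible alternative, sketched here as an assumption rather than the original poster's solution, and assuming C++17 is available: keep a single toPtr and branch on the type at compile time with if constexpr, which avoids the extra overload set.
#include <cstdint>
#include <memory>
#include <type_traits>

template<typename T, uint32_t HandleTag = '_ptr'>
struct X
{
    // Single implementation: dispatch at compile time on whether T is a raw pointer.
    void * toPtr(T t)
    {
        if constexpr (std::is_pointer_v<T>)
            return t;        // T is U*
        else
            return t.get();  // assumed to be a smart pointer such as std::shared_ptr<U>
    }
};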
|
how to specialize template class to get pointer from T, where T can be U* or shared_ptr?
|
Consider the following template:
template<typename T, uint32_t HandleTag = '_ptr'>
struct X
{
void * toPtr(T t)
{
return 0;
}
std::string toHandle(T t)
{
const void *rawptr = toPtr(t);
std::stringstream ss;
for(int i = 24; i >= 0; i -= 8) ss << char((HandleTag >> i) & 0xFF);
ss << ':' << rawptr;
return ss.str();
}
};
where T can be a raw pointer U* or a smart pointer, e.g. shared_ptr<U>.
How to specialize void * X<T,HandleTag>::toPtr(T t) for the two cases?
Not even sure it counts as template specialization, as U is generic, so I'd have to introduce a template arg....
Tried:
template<typename U>
void * X<U*>::toPtr(U* t)
{
return t;
}
template<typename U>
void * X<shared_ptr<U>>::toPtr(shared_ptr<U> t)
{
return t.get();
}
but compiler said:
testptr.cpp:27:15: error: nested name specifier 'X<U *>::' for declaration does not refer into a class, class template or class template partial specialization
void * X<U*>::toPtr(U* t)
~~~~~~~^
testptr.cpp:29:12: error: use of undeclared identifier 't'
return t;
^
testptr.cpp:33:26: error: nested name specifier 'X<shared_ptr<U>>::' for declaration does not refer into a class, class template or class template partial specialization
void * X<shared_ptr<U>>::toPtr(shared_ptr<U> t)
~~~~~~~~~~~~~~~~~~^
testptr.cpp:35:12: error: use of undeclared identifier 't'
return t.get();
^
|
[
"I think I solved it by overloading the X::toPtr method with templates:\ntemplate<typename T, uint32_t HandleTag = '_ptr'>\nstruct X\n{\n template<typename U>\n void * toPtr(U* t)\n {\n return t;\n }\n\n template<typename U>\n void * toPtr(shared_ptr<U> t)\n {\n return t.get();\n }\n\n ...\n\nComments on this solution appreciated.\n"
] |
[
0
] |
[] |
[] |
[
"c++",
"template_specialization",
"templates"
] |
stackoverflow_0074658489_c++_template_specialization_templates.txt
|
Q:
Align horizontally form containing select
``
<div class="container">
<form class="horizontal" action="https://homirent.cloudbeds.com/#/" method="get">
<p class="p1"> City: <select name="city">
<option value="all">All Cities</option>
<option>Cancún</option>
<option>Ciudad De México</option>
<option>Santiago De Querétaro</option>
</select>
Check-In:
<input type="text" name="check_in" placeholder="01/01/2023"/>
Check-Out:
<input type="text" name="check_out" placeholder="02/01/2023"/>
<input type="submit" /> </p>
</form>
</div>
``
We would like to be able to align these forms that contain checkboxes horizontally like the following example:
(image: desired horizontal layout example)
I tried using display: inline-block on the selects instead of float: left; however, not all elements lined up as expected.
A:
You can wrap every form element in another class and then use display: inline-block
Check this out https://jsfiddle.net/rx4hvn/3Ldp90e6/
<div class="form-control">
<label>City</label>
<select name="city">
<option value="all">All Cities</option>
<option>Cancún</option>
<option>Ciudad De México</option>
<option>Santiago De Querétaro</option>
</select>
</div>
.form-control {
display: inline-block;
}
|
Align horizontally form containing select
|
``
<div class="container">
<form class="horizontal" action="https://homirent.cloudbeds.com/#/" method="get">
<p class="p1"> City: <select name="city">
<option value="all">All Cities</option>
<option>Cancún</option>
<option>Ciudad De México</option>
<option>Santiago De Querétaro</option>
</select>
Check-In:
<input type="text" name="check_in" placeholder="01/01/2023"/>
Check-Out:
<input type="text" name="check_out" placeholder="02/01/2023"/>
<input type="submit" /> </p>
</form>
</div>
``
We would like to be able to align these forms that contain checkboxes horizontally like the following example:
(image: desired horizontal layout example)
I tried using display: inline-block on the selects instead of float: left; however, not all elements lined up as expected.
|
[
"You can wrap every form element in another class and then use display: inline-block\nCheck this out https://jsfiddle.net/rx4hvn/3Ldp90e6/\n <div class=\"form-control\">\n <label>City</label>\n <select name=\"city\">\n <option value=\"all\">All Cities</option>\n <option>Cancún</option>\n <option>Ciudad De México</option>\n <option>Santiago De Querétaro</option>\n </select>\n </div>\n\n.form-control {\n display: inline-block;\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"css",
"html"
] |
stackoverflow_0074658358_css_html.txt
|
Q:
How to upload iOS image from gallery via multipart/form data with Swift 5
I have picked a UIImage object from the iOS gallery and want to send it over HTTP to an API using multipart/form-data. It works with Postman and Node.js, but I'm having a problem with the same request in Swift.
Initialization part
import Foundation
var image: UIImage?
let postURL = "http://......"
extension NSMutableData {
func appendString(_ string: String) {
if let data = string.data(using: .utf8) {
self.append(data)
}
}
}
Request preparation
let imageJpeg: Data = image?.jpegData(compressionQuality: 1)
func convertFiledData(fieldName: String, fileName: String, mimeType: String, fileData: Data, fileString: String, using boundary: String) -> Data {
let data = NSMutableData()
data.appendString("--\(boundary)\r\n")
data.appendString("Content-Disposition: form-data; name=\"\(fieldName)\"; file=\"\(fileName)\"\r\n")
data.appendString("Content-Type: \(mimeType)\r\n\r\n")
data.append(fileData)
data.appendString("\r\n")
data.appendString("--\(boundary)--\r\n")
return data as Data
}
let boundary = "Boundary-\(UUID().uuidString)"
var request = URLRequest(url: URL(string: postURL)!, timeoutInterval: Double.infinity)
request.addValue("application/json", forHTTPHeaderField: "Accept")
request.addValue("multipart/form-data; boundary=\(boundary)", forHTTPHeaderField: "Content-Type")
request.httpMethod = "POST"
let mimeType = "image/jpeg"
let httpBody = convertFiledData(fieldName: "file",
fileName: "xray1.jpg",
mimeType: mimeType,
fileData: imageJpeg,
fileString: imageString,
using: boundary)
request.httpBody = httpBody as Data
Request part
URLSession.shared.dataTask(with: request) { (data, res, err) in
do {
if let data = data {
let result = try JSONDecoder().decode(PostModel.self, from: data )
} else {
print("No data")
}
} catch (let error) {
print(error.localizedDescription)
}
}.resume()
The request is sent and the body is assembled as needed, but I get a 422 HTTP error code.
Here is what the res object shows:
Optional(<NSHTTPURLResponse: 0x600001e9e0a0> { URL: http://.... } { Status Code: 422, Headers {
"Content-Length" = (
110
);
"Content-Type" = (
"application/json"
);
Date = (
"Mon, 20 Dec 2021 04:30:59 GMT"
);
Server = (
uvicorn
);
} })
This is the data from URLSession, as sent by the FastAPI server:
{"detail":[{"loc":["body","file"],"msg":"Expected UploadFile, received: <class 'str'>","type":"value_error"}]}
It looks like a problem with the file data format, probably in the .jsonData() part. I tried some other transformations but without any result.
Any idea? Thanks!
A:
I don't know if you solved this issue, but I think the problem may reside in this line:
data.appendString("Content-Disposition: form-data; name=\"\(fieldName)\"; file=\"\(fileName)\"\r\n")
the last part should be filename=\"\(fileName)\"\r\n.
I was getting a similar error response, but in my case the problem was that I didn't pass the extension .jpg after the file name; some servers don't recognize your image file without an extension.
Leaving this answer here as a reference in case people run into a similar problem.
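For completeness, a hedged sketch of the corrected line (field and file variable names taken from the question):
data.appendString("Content-Disposition: form-data; name=\"\(fieldName)\"; filename=\"\(fileName)\"\r\n")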
|
How to upload iOS image from gallery via multipart/form data with Swift 5
|
I have picked a UIImage object from the iOS gallery and want to send it over HTTP to an API using multipart/form-data. It works with Postman and Node.js, but I'm having a problem with the same request in Swift.
Initialization part
import Foundation
var image: UIImage?
let postURL = "http://......"
extension NSMutableData {
func appendString(_ string: String) {
if let data = string.data(using: .utf8) {
self.append(data)
}
}
}
Request preparation
let imageJpeg: Data = image?.jpegData(compressionQuality: 1)
func convertFiledData(fieldName: String, fileName: String, mimeType: String, fileData: Data, fileString: String, using boundary: String) -> Data {
let data = NSMutableData()
data.appendString("--\(boundary)\r\n")
data.appendString("Content-Disposition: form-data; name=\"\(fieldName)\"; file=\"\(fileName)\"\r\n")
data.appendString("Content-Type: \(mimeType)\r\n\r\n")
data.append(fileData)
data.appendString("\r\n")
data.appendString("--\(boundary)--\r\n")
return data as Data
}
let boundary = "Boundary-\(UUID().uuidString)"
var request = URLRequest(url: URL(string: postURL)!, timeoutInterval: Double.infinity)
request.addValue("application/json", forHTTPHeaderField: "Accept")
request.addValue("multipart/form-data; boundary=\(boundary)", forHTTPHeaderField: "Content-Type")
request.httpMethod = "POST"
let mimeType = "image/jpeg"
let httpBody = convertFiledData(fieldName: "file",
fileName: "xray1.jpg",
mimeType: mimeType,
fileData: imageJpeg,
fileString: imageString,
using: boundary)
request.httpBody = httpBody as Data
Request part
URLSession.shared.dataTask(with: request) { (data, res, err) in
do {
if let data = data {
let result = try JSONDecoder().decode(PostModel.self, from: data )
} else {
print("No data")
}
} catch (let error) {
print(error.localizedDescription)
}
}.resume()
The request is sent and the body is assembled as needed, but I get a 422 HTTP error code.
Here is what the res object shows:
Optional(<NSHTTPURLResponse: 0x600001e9e0a0> { URL: http://.... } { Status Code: 422, Headers {
"Content-Length" = (
110
);
"Content-Type" = (
"application/json"
);
Date = (
"Mon, 20 Dec 2021 04:30:59 GMT"
);
Server = (
uvicorn
);
} })
This is the data from URLSession, as sent by the FastAPI server:
{"detail":[{"loc":["body","file"],"msg":"Expected UploadFile, received: <class 'str'>","type":"value_error"}]}
It looks like a problem with the file data format, probably in the .jsonData() part. I tried some other transformations but without any result.
Any idea? Thanks!
|
[
"I don't know if you solved this issue, but I think the problem may reside in this line:\ndata.appendString(\"Content-Disposition: form-data; name=\\\"\\(fieldName)\\\"; file=\\\"\\(fileName)\\\"\\r\\n\")\nthe last part should be filename=\\\"\\(fileName)\\\"\\r\\n.\nI was getting a similar error response, but in my case the problem was I didn't pass the extension name .jpg after the file name, some servers don't recognize your image file without an extension name.\nLeaving this answer here as a reference in case people run into a similar problem.\n"
] |
[
0
] |
[] |
[] |
[
"file_upload",
"httprequest",
"ios",
"multipartform_data",
"swift"
] |
stackoverflow_0070417762_file_upload_httprequest_ios_multipartform_data_swift.txt
|
Q:
Kafka service broken after applying ACL
I'm configuring a kafka 3-node cluster (version 3.2.0) on which I plan to use ACL for authorization. For the moment I am using SASL for authentication and StandardAuthorizer for authorization (I am using kraft).
I set the ACL successfully with this command :
/usr/local/kafka/bin/kafka-acls.sh --command-config /usr/local/kafka/config/kraft/adminclient-config.conf --bootstrap-server <broker hostname>:9092 --add --allow-principal User:* --allow-host <ip> --operation Read --operation Write --topic <topic name>
But then whenever I restart a broker it fails with a similar error:
ERROR [StandardAuthorizer 1] addAcl error (org.apache.kafka.metadata.authorizer.StandardAuthorizerData)
java.lang.RuntimeException: An ACL with ID JjIHfwV4TMi5yo9oPXMxWw already exists.
It seems like it always tries to reapply the ACL, is this normal?
How can I fix this?
Thanks
I tried to exclude authentication issues by removing the SSL settings and keeping just the SASL settings.
I would expect that in a cluster setup the addition or removal of an ACL is propagated to all the brokers, and if not, at least that the broker state would not be broken.
A:
We have a cluster with the same configuration. We are facing problems configuring ACLs too, but in our case we are not able to make the StandardAuthorizer work. When starting the server, it raises an AuthorizerNotReadyException.
This is our configuration to enable the ACL authorizer in server.properties:
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
allow.everyone.if.no.acl.found=true
super.users=User:admin
With respect to your server.properties configuration file, do you find any difference?
Maybe if we succeed in running the same configuration, we can see if we run into the same problem you are experiencing and look for a solution.
|
Kafka service broken after applying ACL
|
I'm configuring a kafka 3-node cluster (version 3.2.0) on which I plan to use ACL for authorization. For the moment I am using SASL for authentication and StandardAuthorizer for authorization (I am using kraft).
I set the ACL successfully with this command :
/usr/local/kafka/bin/kafka-acls.sh --command-config /usr/local/kafka/config/kraft/adminclient-config.conf --bootstrap-server <broker hostname>:9092 --add --allow-principal User:* --allow-host <ip> --operation Read --operation Write --topic <topic name>
But then whenever I restart a broker it fails with a similar error:
ERROR [StandardAuthorizer 1] addAcl error (org.apache.kafka.metadata.authorizer.StandardAuthorizerData)
java.lang.RuntimeException: An ACL with ID JjIHfwV4TMi5yo9oPXMxWw already exists.
It seems like it always tries to reapply the ACL, is this normal?
How can I fix this?
Thanks
I tried to exclude authentication issues by removing the SSL settings and keeping just the SASL settings.
I would expect that in a cluster setup the addition or removal of an ACL is propagated to all the brokers, and if not, at least that the broker state would not be broken.
|
[
"we have a cluster with the same configuration. We are facing problems configuring ACLs too. But in our case we are not able to make work the StandardAuthorizer. When starting the server, it raises a AuthorizerNotReadyException.\nThis is our configuration to enable the ACL authorizer in server.properties:\nauthorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer\nallow.everyone.if.no.acl.found=true\nsuper.users=User:admin\n\nRespect to your server.properties configuration file, do you find any difference?\nMaybe if we succeed to run the same configuration, we can see if we incur in the same problem your are experiencing and look for a solution.\n"
] |
[
0
] |
[] |
[] |
[
"apache_kafka",
"kraft"
] |
stackoverflow_0074208674_apache_kafka_kraft.txt
|
Q:
disable web middleware for specific routes in laravel 5.2
I want guest users to have access to the home page, but the built-in authentication process in Laravel redirects them to the login page. How can I give guest users access to the home page?
my routes.php:
Route::group(['middleware' => 'web'], function () {
Route::auth();
Route::get('/', 'HomeController@index');
Route::get('/insert', 'HomeController@insertform');
Route::get('/job/{id}', 'JobsController@show');
Route::get('/city/{city}', 'JobsController@city');
Route::post('/insert', 'HomeController@insert');
Route::get('/cityinsert', 'HomeController@cityinsert');
Route::post('/cityinsert', 'HomeController@cityinsertpost');
});
and authenticate.php
class Authenticate
{
/**
* Handle an incoming request.
*
* @param \Illuminate\Http\Request $request
* @param \Closure $next
* @param string|null $guard
* @return mixed
*/
public function handle($request, Closure $next, $guard = null)
{
if (Auth::guard($guard)->guest()) {
if ($request->ajax()) {
return response('Unauthorized.', 401);
} else {
return redirect()->guest('login');
}
}
return $next($request);
}
}
and this is my kernel.php
class Kernel extends HttpKernel
{
/**
* The application's global HTTP middleware stack.
*
* These middleware are run during every request to your application.
*
* @var array
*/
protected $middleware = [
\Illuminate\Foundation\Http\Middleware\CheckForMaintenanceMode::class,
];
/**
* The application's route middleware groups.
*
* @var array
*/
protected $middlewareGroups = [
'web' => [
\App\Http\Middleware\EncryptCookies::class,
\Illuminate\Cookie\Middleware\AddQueuedCookiesToResponse::class,
\Illuminate\Session\Middleware\StartSession::class,
\Illuminate\View\Middleware\ShareErrorsFromSession::class,
\App\Http\Middleware\VerifyCsrfToken::class,
],
'api' => [
'throttle:60,1',
],
];
/**
* The application's route middleware.
*
* These middleware may be assigned to groups or used individually.
*
* @var array
*/
protected $routeMiddleware = [
'auth' => \App\Http\Middleware\Authenticate::class,
'auth.basic' => \Illuminate\Auth\Middleware\AuthenticateWithBasicAuth::class,
'guest' => \App\Http\Middleware\RedirectIfAuthenticated::class,
'throttle' => \Illuminate\Routing\Middleware\ThrottleRequests::class,
];
}
A:
I prefer to exclude middleware via routes. You can do it in two ways:
Single action:
Route::post('login', 'LoginController@login')->withoutMiddleware(['auth']);
Group mode:
Route::group([
'prefix' => 'forgot-password',
'excluded_middleware' => ['auth'],
], function () {
Route::post('send-email', 'ForgotPasswordController@sendEmail');
Route::post('save-new-password', 'ForgotPasswordController@saveNewPassword');
});
Tested on Laravel 7.7
A:
Add an exception in the middleware declaration in the constructor
Route::get('/', 'HomeController@index');
For the above route to be exempted from authentication, you should pass the function name to the middleware like below:
class HomeController extends Controller
{
/**
* Create a new controller instance.
*
* @return void
*/
public function __construct()
{
$this->middleware('auth', ['except' => 'index']);
}
}
A:
Remove the middleware from HomeController construct:
class HomeController extends Controller
{
/**
* Create a new controller instance.
*
* @return void
*/
public function __construct()
{
//$this->middleware('auth');
}
}
A:
I can add to Sidharth's answer that you can make an exception for several methods by including them in an array:
class HomeController extends Controller
{
/**
* Create a new controller instance.
*
* @return void
*/
public function __construct()
{
$this->middleware('auth', ['except' => ['index', 'show']]);
}
}
Laravel 5.5 tested.
A:
You can also separate the middleware and except calls. Try this one:
/**
* Create a new controller instance.
*
* @return void
*/
public function __construct()
{
$this->middleware('guest')->except([
'submitLogout',
'showUserDetail'
]);
}
Tested on Laravel 5.4
A:
Add except URL to VerifyCsrfToken
app/http/middleware/VerifyCsrfToken.php
<?php
namespace App\Http\Middleware;
use Illuminate\Foundation\Http\Middleware\VerifyCsrfToken as Middleware;
class VerifyCsrfToken extends Middleware
{
/**
* The URIs that should be excluded from CSRF verification.
*
* @var array
*/
protected $except = [
'stripe/*',
'http://example.com/foo/bar',
'http://example.com/foo/*',
];
}
Source: Laravel Documentation CSRF exclude URL
*Tested on Lavarel 7.0 as well
A:
Recently I needed that functionality in an old Laravel project.
God bless Laravel for the macroable feature :)
AppServiceProvider.php
public function boot()
{
Route::macro('withoutMiddleware', function ($excludedMiddlewares) {
$this->action['middleware'] = array_filter(
$this->action['middleware'],
function ($middleware) use ($excludedMiddlewares) {
return !in_array($middleware, $excludedMiddlewares);
});
return $this;
});
}
Then you can use it like this:
Route::get('something')->withoutMiddleware(['auth']);
|
disable web middleware for specific routes in laravel 5.2
|
I want guest users to have access to the home page, but the built-in authentication process in Laravel redirects them to the login page. How can I give guest users access to the home page?
my routes.php:
Route::group(['middleware' => 'web'], function () {
Route::auth();
Route::get('/', 'HomeController@index');
Route::get('/insert', 'HomeController@insertform');
Route::get('/job/{id}', 'JobsController@show');
Route::get('/city/{city}', 'JobsController@city');
Route::post('/insert', 'HomeController@insert');
Route::get('/cityinsert', 'HomeController@cityinsert');
Route::post('/cityinsert', 'HomeController@cityinsertpost');
});
and authenticate.php
class Authenticate
{
/**
* Handle an incoming request.
*
* @param \Illuminate\Http\Request $request
* @param \Closure $next
* @param string|null $guard
* @return mixed
*/
public function handle($request, Closure $next, $guard = null)
{
if (Auth::guard($guard)->guest()) {
if ($request->ajax()) {
return response('Unauthorized.', 401);
} else {
return redirect()->guest('login');
}
}
return $next($request);
}
}
and this is my kernel.php
class Kernel extends HttpKernel
{
/**
* The application's global HTTP middleware stack.
*
* These middleware are run during every request to your application.
*
* @var array
*/
protected $middleware = [
\Illuminate\Foundation\Http\Middleware\CheckForMaintenanceMode::class,
];
/**
* The application's route middleware groups.
*
* @var array
*/
protected $middlewareGroups = [
'web' => [
\App\Http\Middleware\EncryptCookies::class,
\Illuminate\Cookie\Middleware\AddQueuedCookiesToResponse::class,
\Illuminate\Session\Middleware\StartSession::class,
\Illuminate\View\Middleware\ShareErrorsFromSession::class,
\App\Http\Middleware\VerifyCsrfToken::class,
],
'api' => [
'throttle:60,1',
],
];
/**
* The application's route middleware.
*
* These middleware may be assigned to groups or used individually.
*
* @var array
*/
protected $routeMiddleware = [
'auth' => \App\Http\Middleware\Authenticate::class,
'auth.basic' => \Illuminate\Auth\Middleware\AuthenticateWithBasicAuth::class,
'guest' => \App\Http\Middleware\RedirectIfAuthenticated::class,
'throttle' => \Illuminate\Routing\Middleware\ThrottleRequests::class,
];
}
|
[
"I prefer to exclude middleware via routes. You can do it in two ways:\n\nSingle action:\n\nRoute::post('login', 'LoginController@login')->withoutMiddleware(['auth']);\n\n\nGroup mode:\n\nRoute::group([\n 'prefix' => 'forgot-password',\n 'excluded_middleware' => ['auth'],\n], function () {\n Route::post('send-email', 'ForgotPasswordController@sendEmail');\n Route::post('save-new-password', 'ForgotPasswordController@saveNewPassword');\n});\n\nTested on Laravel 7.7\n",
"Add an exception in the middleware declaration in the construct\nRoute::get('/', 'HomeController@index');\n\nfor the above route to be exempted from authentication you should pass the function name to the middleware like below\nclass HomeController extends Controller\n{\n /**\n * Create a new controller instance.\n *\n * @return void\n */\n public function __construct()\n {\n $this->middleware('auth', ['except' => 'index']);\n }\n}\n\n",
"Remove the middleware from HomeController construct:\nclass HomeController extends Controller\n{\n /**\n * Create a new controller instance.\n *\n * @return void\n */\n public function __construct()\n {\n //$this->middleware('auth');\n }\n}\n\n",
"I can add to Sidharth answer, that you can use several methods exeption, by including them in array:\nclass HomeController extends Controller\n{\n /**\n * Create a new controller instance.\n *\n * @return void\n */\n public function __construct()\n {\n $this->middleware('auth', ['except' => ['index', 'show']]);\n }\n}\n\nLaravel 5.5 tested.\n",
"You can also separate between middleware and except. Try this one :\n/**\n * Create a new controller instance.\n *\n * @return void\n */\npublic function __construct()\n{\n $this->middleware('guest')->except([\n 'submitLogout',\n 'showUserDetail'\n ]);\n}\n\nTested on Laravel 5.4\n",
"Add except URL to VerifyCsrfToken\napp/http/middleware/VerifyCsrfToken.php\n\n<?php\n\nnamespace App\\Http\\Middleware;\n\nuse Illuminate\\Foundation\\Http\\Middleware\\VerifyCsrfToken as Middleware;\n\nclass VerifyCsrfToken extends Middleware\n{\n /**\n * The URIs that should be excluded from CSRF verification.\n *\n * @var array\n */\n protected $except = [\n 'stripe/*',\n 'http://example.com/foo/bar',\n 'http://example.com/foo/*',\n ];\n}\n\nSource: Laravel Documentation CSRF exclude URL\n*Tested on Lavarel 7.0 as well\n",
"Recently I need that functionality in an old Laravel project.\nGod bless Laravel for macroable feature :)\nAppServiceProvider.php\npublic function boot()\n {\n Route::macro('withoutMiddleware', function ($excludedMiddlewares) {\n $this->action['middleware'] = array_filter(\n $this->action['middleware'],\n function ($middleware) use ($excludedMiddlewares) {\n return !in_array($middleware, $excludedMiddlewares);\n });\n\n return $this;\n });\n }\n\nThen you can use it like this:\nRoute::get('something')->withoutMiddleware(['auth']);\n\n"
] |
[
36,
31,
12,
4,
3,
0,
0
] |
[] |
[] |
[
"authentication",
"laravel",
"laravel_5.2",
"middleware",
"php"
] |
stackoverflow_0035377001_authentication_laravel_laravel_5.2_middleware_php.txt
|
Q:
ImportError: No module named wget
Please help me find the reason why, on macOS, when I include the library
import wget
I'm getting error
File "/Users/xx/python/import.py", line 4, in <module>
import wget
ImportError: No module named wget
This library is installed
xx$ pip3 install wget
Requirement already satisfied: wget in /usr/local/lib/python3.6/site-packages (3.2)
I just suppose that some path is not set, but I don't know how to prove this.
Please help me find a solution to this problem.
A:
Try pip install wget, maybe you’re using python 2
A:
With pip3 you are installing the module for Python 3.
It can be that you have both Python 2 and 3 installed and your environment is pointing to Python 2 by default.
Check your Python version or install wget for Python 2:
python -V
pip install wget
A:
this should not be the case, but check if site-packages is in the path for accessing modules
>>> import sys
>>> sys.path
[..., '...\\python3.6\\lib\\site-packages', ...] ## if this is here I cannot help you
If not, try repairing Python.
You can do that by running the setup file (the one you installed with in the first place)
and clicking Repair among the 3 options.
A:
If you process the python script by command:
python import.py
or
python3 import.py
it should work.
But if you process the executable python script by command:
./import.py ENTER
then include as the first line of the script import.py:
#!/usr/bin/env python
or
#!/usr/bin/env python3
A:
sudo apt-get install --reinstall python3-wget
A:
The following command worked for me in Jupyter Lab
!pip install wget
Hope this does help!
A:
In Jupyter Lab, although my Python was 3.9, it was using 3.7 paths (I have multiple Pythons installed):
import sys
sys.path
['D:\\Projects',
'C:\\Program Files\\Python37\\python37.zip',
'C:\\Program Files\\Python37\\DLLs',
'C:\\Program Files\\Python37\\lib',
'C:\\Program Files\\Python37',
'',
'C:\\Users\\John\\AppData\\Roaming\\Python\\Python37\\site-packages',
'C:\\Program Files\\Python37\\lib\\site-packages',
'C:\\Program Files\\Python37\\lib\\site-packages\\win32',
'C:\\Program Files\\Python37\\lib\\site-packages\\win32\\lib',
'C:\\Program Files\\Python37\\lib\\site-packages\\Pythonwin',
'C:\\Program Files\\Python37\\lib\\site-packages\\IPython\\extensions',
'C:\\Users\\John\\.ipython']
So I did !pip3.7 install --user wget, and then it worked.
A:
pip install wget
if in colab use:
!pip install wget
A:
I had the same problem recently and using python3 instead of py worked for me.
|
ImportError: No module named wget
|
Please help me find the reason why, on macOS, when I include the library
import wget
I'm getting error
File "/Users/xx/python/import.py", line 4, in <module>
import wget
ImportError: No module named wget
This library is installed
xx$ pip3 install wget
Requirement already satisfied: wget in /usr/local/lib/python3.6/site-packages (3.2)
I just suppose that some path is not set, but I don't know how to prove this.
Please help me find a solution to this problem.
|
[
"Try pip install wget, maybe you’re using python 2\n",
"With pip3 you are installing module for python 3,\nIt can b that you have both versions of python 2 and 3 and you your environment is pointing default to python 2 \nCheck python version or install wget for python 2\npython -V \npip install wget\n\n",
"this should not be the case, but check if site-packages is in the path for accessing modules\n>>> import sys\n>>> sys.path\n[..., '...\\\\python3.6\\\\lib\\\\site-packages', ...] ## if this is here I cannot help you\n\nif not, try repairing python\nyou can do that by clicking setup file (one with which you installed in the first place),\nand among 3 options click repair\n",
"If you process the python script by command:\npython import.py\n\nor \npython3 import.py\n\nit should work.\nBut if you process the executable python script by command:\n./import.py ENTER\n\nthen incldue as the first line of the script import.py:\n#!/usr/bin/env python\n\nor\n#!/usr/bin/env python3\n\n",
"sudo apt-get install --reinstall python3-wget\n\n",
"The following command worked for me in Jupyter Lab\n!pip install wget\n\nHope this does help!\n",
"In Jupyter Lab, although my Python was 3.9 but it was using 3.7 paths (I have multiple Pythons installed):\nimport sys\n\nsys.path\n\n['D:\\\\Projects',\n 'C:\\\\Program Files\\\\Python37\\\\python37.zip',\n 'C:\\\\Program Files\\\\Python37\\\\DLLs',\n 'C:\\\\Program Files\\\\Python37\\\\lib',\n 'C:\\\\Program Files\\\\Python37',\n '',\n 'C:\\\\Users\\\\John\\\\AppData\\\\Roaming\\\\Python\\\\Python37\\\\site-packages',\n 'C:\\\\Program Files\\\\Python37\\\\lib\\\\site-packages',\n 'C:\\\\Program Files\\\\Python37\\\\lib\\\\site-packages\\\\win32',\n 'C:\\\\Program Files\\\\Python37\\\\lib\\\\site-packages\\\\win32\\\\lib',\n 'C:\\\\Program Files\\\\Python37\\\\lib\\\\site-packages\\\\Pythonwin',\n 'C:\\\\Program Files\\\\Python37\\\\lib\\\\site-packages\\\\IPython\\\\extensions',\n 'C:\\\\Users\\\\John\\\\.ipython']\n\nSo I did !pip3.7 install --user wget, and then it worked.\n",
"pip install wget\n\nif in colab use:\n!pip install wget\n\n",
"I had the same problem recently and using python3 instead of py worked for me.\n"
] |
[
14,
4,
1,
1,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"macos",
"python",
"wget"
] |
stackoverflow_0051069716_macos_python_wget.txt
|
Q:
Problem installing github-cli-2.20.2-r0 on Alpine Linux v3.17 running on WSL-2
I am trying to install the github-cli-2.20.2-r0 on Alpine Linux v3.17 running on WSL-2
I have the following repositories on my system:
cat /etc/apk/repositories
http://dl-cdn.alpinelinux.org/alpine/latest-stable/main
http://dl-cdn.alpinelinux.org/alpine/latest-stable/community
http://dl-cdn.alpinelinux.org/alpine/edge/main
http://dl-cdn.alpinelinux.org/alpine/edge/community
http://dl-cdn.alpinelinux.org/alpine/edge/testing
I did an apk search for the github cli
apk search github
foliate-2.6.4-r1
github-cli-2.20.2-r0
github-cli-zsh-completion-2.20.2-r0
py3-pygithub-1.57-r0
github-cli-bash-completion-2.20.2-r0
tootle-1.0-r2
github-cli-doc-2.20.2-r0
Then tried an install
sudo apk add github-cli-2.20.2-r0
ERROR: unable to select packages:
github-cli-2.20.2-r0 (no such package):
required by: world[github-cli-2.20.2-r0]
What am I doing wrong?
A:
Removing the version worked for me:
sudo apk add github-cli
|
Problem installing github-cli-2.20.2-r0 on Alpine Linux v3.17 running on WSL-2
|
I am trying to install the github-cli-2.20.2-r0 on Alpine Linux v3.17 running on WSL-2
I have the following repositories on my system:
cat /etc/apk/repositories
http://dl-cdn.alpinelinux.org/alpine/latest-stable/main
http://dl-cdn.alpinelinux.org/alpine/latest-stable/community
http://dl-cdn.alpinelinux.org/alpine/edge/main
http://dl-cdn.alpinelinux.org/alpine/edge/community
http://dl-cdn.alpinelinux.org/alpine/edge/testing
I did an apk search for the github cli
apk search github
foliate-2.6.4-r1
github-cli-2.20.2-r0
github-cli-zsh-completion-2.20.2-r0
py3-pygithub-1.57-r0
github-cli-bash-completion-2.20.2-r0
tootle-1.0-r2
github-cli-doc-2.20.2-r0
Then tried an install
sudo apk add github-cli-2.20.2-r0
ERROR: unable to select packages:
github-cli-2.20.2-r0 (no such package):
required by: world[github-cli-2.20.2-r0]
What am I doing wrong?
|
[
"Removing the version worked for me:\n\nsudo apk add github-cli\n\n"
] |
[
0
] |
[] |
[] |
[
"apk",
"github"
] |
stackoverflow_0074657694_apk_github.txt
|
Q:
How can you export the Visual Studio Code extension list?
I need to send all my installed extensions to my colleagues. How can I export them?
The extension manager seems to do nothing... It won't install any extension.
A:
Automatic
If you are looking for an easy one-stop tool to do it for you, I would suggest looking into the Settings Sync extension.
It will allow
Export of your configuration and extensions
Share it with coworkers and teams. You can update the configuration. Their settings will auto updated.
Manual
Make sure you have the most current version of Visual Studio Code. If you install via a company portal, you might not have the most current version.
On machine A
Unix:
code --list-extensions | xargs -L 1 echo code --install-extension
Windows (PowerShell, e. g. using Visual Studio Code's integrated Terminal):
code --list-extensions | % { "code --install-extension $_" }
Copy and paste the echo output to machine B
Sample output
code --install-extension Angular.ng-template
code --install-extension DSKWRK.vscode-generate-getter-setter
code --install-extension EditorConfig.EditorConfig
code --install-extension HookyQR.beautify
Please make sure you have the code command line installed. For more information, please visit Command Line Interface (CLI).
A:
I've needed to do this myself a few times - especially when installing on another machine.
Common questions will give you the location of your folder
Visual Studio Code looks for extensions under your extensions folder .vscode/extensions. Depending on your platform it is located:
Windows %USERPROFILE%\.vscode\extensions
Mac ~/.vscode/extensions
Linux ~/.vscode/extensions
That should show you a list of the extensions.
I've also had success using Visual Studio Code Settings Sync Extension to sync settings to GitHub gist.
In the latest release of Visual Studio Code (May 2016), it is now possible to list the installed extensions on the command line:
code --list-extensions
A:
I have developed an extension which will synchronise your all Visual Studio Code settings across multiple instances.
Key Features
Use your GitHub account token.
Easy to upload and download on one click.
Saves all settings and snippets files.
Upload key: Shift + Alt + U
Download key: Shift + Alt + D
Type Sync In Order to View all sync options
It synchronises the
Settings file
Keybinding file
Launch file
Snippets folder
Visual Studio Code extensions
Detail Documentation Source
Visual Studio Code Sync ReadMe
Download here: Visual Studio Code Settings Sync
A:
Windows (PowerShell) version of Benny's answer
Machine A:
In the Visual Studio Code PowerShell terminal:
code --list-extensions > extensions.list
Machine B:
Copy extension.list to the machine B
In the Visual Studio Code PowerShell terminal:
cat extensions.list |% { code --install-extension $_}
A:
I used the following command to copy my extensions from Visual Studio Code to Visual Studio Code insiders:
code --list-extensions | xargs -L 1 code-insiders --install-extension
The argument -L 1 allows us to execute the command code-insiders --install-extension once for each input line generated by code --list-extensions.
A:
For Linux
On the old machine:
code --list-extensions > vscode-extensions.list
On the new machine:
cat vscode-extensions.list | xargs -L 1 code --install-extension
A:
If using bash, you can use the following commands:
Export extensions
code --list-extensions |
xargs -L 1 echo code --install-extension |
sed 's/$/ --force/' |
sed '$!s/$/ \&/' > install-extensions.sh
With bash alias:
# eve - export vscode extensions
alias eve="code --list-extensions |
xargs -L 1 echo code --install-extension |
sed 's/$/ --force/' |
sed '\$!s/$/ \&/' > install-extensions.sh"
Just run eve.
Install extensions
sh install-extensions.sh
A:
Dump extensions:
code --list-extensions > extensions.txt
Install extensions with Bash (Linux, OS X and WSL):
cat extensions.txt | xargs -L 1 code --install-extension
Install extensions on Windows with PowerShell:
cat extensions.txt |% { code --install-extension $_}
A:
https://code.visualstudio.com/docs/editor/extension-gallery#_workspace-recommended-extensions
A better way to share extension list is to create workspace-based extension set for your colleagues.
After generating a list of extensions via code --list-extensions | xargs -L 1 echo code --install-extension (check your $PATH contains Visual Studio Code entry C:\Program Files\Microsoft VS Code\bin\ before running code commands), run Extensions: Configure Recommended Extensions (Workspace Folder) Visual Studio Code command (Ctrl + Shift + P) and put extensions into the generated .vscode/extensions.json file:
{
"recommendations": [
"eg2.tslint",
"dbaeumer.vscode-eslint",
"msjsdiag.debugger-for-chrome"
]
}
A:
Generate a Windows command file (batch) for installing extensions:
for /F "tokens=*" %i in ('code --list-extensions')
do @echo call code --install-extension %i >> install.cmd
A:
Open the Visual Studio Code console and write:
code --list-extensions (or code-insiders --list-extensions if Visual Studio Code insider is installed)
Then share the command line with colleagues:
code --install-extension {ext1} --install-extension {ext2} --install-extension {extN} replacing {ext1}, {ext2}, ... , {extN} with the extension you listed
For Visual Studio Code insider: code-insiders --install-extension {ext1} ...
If they copy/paste it in Visual Studio Code command-line terminal, they'll install the shared extensions.
More information on command-line-extension-management.
A:
How to export your Visual Studio Code extensions from the terminal. Here is git for that. Maybe this helps somebody.
How to export your Visual Studio Code extensions from the terminal
Note: Unix-like systems only.
Export your extensions to a shell file:
code --list-extensions | sed -e 's/^/code --install-extension /' > my_vscode_extensions.sh
Verify your extensions installer file:
less my_vscode_extensions.sh
Install your extensions (optional)
Run your my_vscode_extensions.sh using a Bash command:
bash my_vscode_extensions.sh
A:
There is an Extension Manager extension, that may help. It seems to allow to install a set of extensions specified in the settings.json.
A:
Benny's answer on Windows with the Linux subsystem:
code --list-extensions | wsl xargs -L 1 echo code --install-extension
A:
code --list-extensions > list
sed -i 's/.*/\"&\",/' list
Copy contents of file list and add to file .vscode/extensions.json in the "recommendations" section.
If extensions.json doesn't exist then create a file with the following contents
{
"recommendations": [
// Add content of file list here
]
}
Share the extensions.json file and ask another user to add to the .vscode folder. Visual Studio Code will prompt for installation of extensions.
A:
Under windows typically I need to run
cd C:\Program Files\Microsoft VS Code\bin
code.cmd --list-extensions
What you don't do is run the code.exe directly under C:\Program Files\Microsoft VS Code\
A:
Now there's a feature still in preview that allows you to sign in with a Microsoft or GitHub account and have your settings synced without any additional extension. It's pretty simple and straightforward. You can learn more here.
The things you can sync.
A:
I opened the Visual Studio Code extensions folder and executed:
find * -maxdepth 2 -name "package.json" | xargs grep "name"
That gives you a list from which you can extract the extension names.
A:
If you intend to share workspace extensions configuration across a team, you should look into the Recommended Extensions feature of Visual Studio Code.
To generate this file, open the command palette > Configure Recommended Extensions (Workspace Folder). From there, if you wanted to get all of your current extensions and put them in here, you could use the --list-extensions stuff mentioned in other answers, but add some AWK script to make it paste-able into a JSON array (you can get more or less advanced with this as you please - this is just a quick example):
code --list-extensions | awk '{ print "\""$0"\"\,"}'
The advantage of this method is that your team-wide workspace configuration can be checked into source control. With this file present in a project, when the project is opened Visual Studio Code will notify the user that there are recommended extensions to install (if they don't already have them) and can install them all with a single button press.
A:
For those that are wondering how to copy your extensions from Visual Studio Code to Visual Studio Code insiders, use this modification of Benny's answer:
code --list-extensions | xargs -L 1 echo code-insiders --install-extension
A:
If you would like to transfer all the extensions from code to code-insiders or vice versa, here is what worked for me from Git Bash.
code --list-extensions | xargs -L 1 code-insiders --install-extension
This will install all the missing extensions and skip the installed ones. After that, you will need to close and reopen Visual Studio Code.
Similarly, you can transfer extensions from code-insiders to code with the following:
code-insiders --list-extensions | xargs -L 1 code --install-extension
A:
On Windows, using a powershell script. This returns a "FriendlyName" which better reflects the name as it will appear in the Visual Studio Code UI (which code --list-extensions doesn't do):
<#
.SYNOPSIS
Lists installed Visual Studio Code Extensions in a friendly format.
.DESCRIPTION
Lists installed Visual Studio Code Extensions in a friendly format.
.INPUTS
None. You cannot pipe objects to this.
.OUTPUTS
A list of installed Visual Studio Code Extensions in a friendly format.
.EXAMPLE
PS> .\List-Extensions.ps1
.NOTES
Author: John Bentley
Version: 2022-07-11 18:55
Latest at: [How can you export the Visual Studio Code extension list?
> John Bentley's Answer]
(https://stackoverflow.com/a/72929878/872154)
#>
# Change to your directory
$packages = Get-ChildItem C:/Users/John/.vscode/extensions/*/package.json
Function Out-JsonItem($File) {
$json = Get-Content $File | ConvertFrom-Json
$extensionMetaData = [PSCustomObject]@{
# This is the name that appears in the Visual Studio Code UI,
# Extensions Tab
FriendlyName =
if ($json.displayName) { $json.displayName } else { $json.name }
# Name = $json.name
PublisherAndName = $json.publisher + '.' + $json.name
Description = $json.description
}
return $extensionMetaData
}
$extensions = ($packages | ForEach-Object { Out-JsonItem -File $_ })
$extensions.GetEnumerator() | Sort-Object {$_.FriendlyName}
# Alternate sorting (Same order as `code --list-extensions`).
# $extensions.GetEnumerator() | Sort-Object {$_.PublisherAndName}
Example output:
FriendlyName PublisherAndName
------------ ----------------
[Deprecated] Debugger for Chrome msjsdiag.debugger-for-chrome
%displayName% ms-vscode-remote.remote-wsl
Apache Conf mrmlnc.vscode-apache
Apache Conf Snippets eiminsasete.apacheconf-snippets
Auto Close Tag formulahendry.auto-close-tag
Beautify HookyQR.beautify
Blank Line Organizer rintoj.blank-line-organizer
change-case wmaurer.change-case
Character Count stevensona.character-count
...
Word Count ms-vscode.wordcount
Word Count OliverKovacs.word-count
XML redhat.vscode-xml
XML Tools DotJoshJohnson.xml
XML Tools qub.qub-xml-vscode
XPath Notebook for Visual Studio Code deltaxml.xpath-notebook
XSLT/XPath for Visual Studio Code deltaxml.xslt-xpath
Instructions:
Copy script code and save it to a file as List-Extensions.ps1 somewhere (Windows only).
Change the line $packages = Get-ChildItem C:/Users/John/.vscode/extensions/*/package.json to point to your directory (change the user name).
Open Windows Terminal at the directory where you saved List-Extensions.ps1 (the Set-Location command might help here).
Run the script from Windows Terminal with .\List-Extensions.ps1
A:
For Linux/Mac only, export the installed Visual Studio Code extensions in the form of an installation script. It's a Z shell (Zsh) script, but it may run in Bash as well.
https://gist.github.com/jvlad/6c92178bbfd1906b7d83c69780ee4630
A:
File > Preferences > Turn on Settings Sync Will sync your VS Code settings across different devices.
|
How can you export the Visual Studio Code extension list?
|
I need to send all my installed extensions to my colleagues. How can I export them?
The extension manager seems to do nothing... It won't install any extension.
|
[
"Automatic\nIf you are looking forward to an easy one-stop tool to do it for you, I would suggest you to look into the Settings Sync extension.\nIt will allow\n\nExport of your configuration and extensions\nShare it with coworkers and teams. You can update the configuration. Their settings will auto updated.\n\nManual\n\nMake sure you have the most current version of Visual Studio Code. If you install via a company portal, you might not have the most current version.\nOn machine A\nUnix:\ncode --list-extensions | xargs -L 1 echo code --install-extension\n\nWindows (PowerShell, e. g. using Visual Studio Code's integrated Terminal):\ncode --list-extensions | % { \"code --install-extension $_\" }\n\nCopy and paste the echo output to machine B\nSample output\ncode --install-extension Angular.ng-template\ncode --install-extension DSKWRK.vscode-generate-getter-setter\ncode --install-extension EditorConfig.EditorConfig\ncode --install-extension HookyQR.beautify\n\n\nPlease make sure you have the code command line installed. For more information, please visit Command Line Interface (CLI).\n",
"I've needed to do this myself a few times - especially when installing on another machine.\nCommon questions will give you the location of your folder\n\nVisual Studio Code looks for extensions under your extensions folder .vscode/extensions. Depending on your platform it is located:\n\nWindows %USERPROFILE%\\.vscode\\extensions\nMac ~/.vscode/extensions\nLinux ~/.vscode/extensions\n\nThat should show you a list of the extensions.\nI've also had success using Visual Studio Code Settings Sync Extension to sync settings to GitHub gist.\nIn the latest release of Visual Studio Code (May 2016), it is now possible to list the installed extensions on the command line:\ncode --list-extensions\n\n",
"I have developed an extension which will synchronise your all Visual Studio Code settings across multiple instances.\nKey Features\n\nUse your GitHub account token.\nEasy to upload and download on one click.\nSaves all settings and snippets files.\nUpload key: Shift + Alt + U\nDownload key: Shift + Alt + D\nType Sync In Order to View all sync options\n\nIt synchronises the\n\nSettings file\nKeybinding file\nLaunch file\nSnippets folder\nVisual Studio Code extensions\n\nDetail Documentation Source\nVisual Studio Code Sync ReadMe\nDownload here: Visual Studio Code Settings Sync\n",
"Windows (PowerShell) version of Benny's answer\nMachine A:\nIn the Visual Studio Code PowerShell terminal:\ncode --list-extensions > extensions.list\nMachine B:\n\nCopy extension.list to the machine B\n\nIn the Visual Studio Code PowerShell terminal:\n cat extensions.list |% { code --install-extension $_}\n\n\n\n",
"I used the following command to copy my extensions from Visual Studio Code to Visual Studio Code insiders:\ncode --list-extensions | xargs -L 1 code-insiders --install-extension\n\nThe argument -L 1 allows us to execute the command code-insiders --install-extension once for each input line generated by code --list-extensions.\n",
"For Linux\nOn the old machine:\ncode --list-extensions > vscode-extensions.list\n\nOn the new machine:\ncat vscode-extensions.list | xargs -L 1 code --install-extension\n\n",
"If using bash, you can use the following commands:\nExport extensions\ncode --list-extensions |\nxargs -L 1 echo code --install-extension |\nsed 's/$/ --force/' |\nsed '$!s/$/ \\&/' > install-extensions.sh\n\nWith bash alias:\n# eve - export vscode extensions\nalias eve=\"code --list-extensions |\nxargs -L 1 echo code --install-extension |\nsed 's/$/ --force/' |\nsed '\\$!s/$/ \\&/' > install-extensions.sh\"\n\nJust run eve.\nInstall extensions\nsh install-extensions.sh\n\n",
"Dump extensions:\ncode --list-extensions > extensions.txt\n\nInstall extensions with Bash (Linux, OS X and WSL):\ncat extensions.txt | xargs code --list-extensions {}\n\nInstall extensions on Windows with PowerShell:\ncat extensions.txt |% { code --install-extension $_}\n\n",
"\nhttps://code.visualstudio.com/docs/editor/extension-gallery#_workspace-recommended-extensions\n\nA better way to share extension list is to create workspace-based extension set for your colleagues.\nAfter generating a list of extensions via code --list-extensions | xargs -L 1 echo code --install-extension (check your $PATH contains Visual Studio Code entry C:\\Program Files\\Microsoft VS Code\\bin\\ before running code commands), run Extensions: Configure Recommended Extensions (Workspace Folder) Visual Studio Code command (Ctrl + Shift + P) and put extensions into the generated .vscode/extensions.json file:\n{\n \"recommendations\": [\n \"eg2.tslint\",\n \"dbaeumer.vscode-eslint\",\n \"msjsdiag.debugger-for-chrome\"\n ]\n}\n\n",
"Generate a Windows command file (batch) for installing extensions:\nfor /F \"tokens=*\" %i in ('code --list-extensions')\n do @echo call code --install-extension %i >> install.cmd\n\n",
"Open the Visual Studio Code console and write:\ncode --list-extensions (or code-insiders --list-extensions if Visual Studio Code insider is installed)\nThen share the command line with colleagues:\ncode --install-extension {ext1} --install-extension {ext2} --install-extension {extN} replacing {ext1}, {ext2}, ... , {extN} with the extension you listed\nFor Visual Studio Code insider: code-insiders --install-extension {ext1} ...\nIf they copy/paste it in Visual Studio Code command-line terminal, they'll install the shared extensions.\nMore information on command-line-extension-management.\n",
"How to export your Visual Studio Code extensions from the terminal. Here is git for that. Maybe this helps somebody.\nHow to export your Visual Studio Code extensions from the terminal\nNote: Unix-like systems only.\n\nExport your extensions to a shell file:\n\ncode --list-extensions | sed -e 's/^/code --install-extension /' > my_vscode_extensions.sh\n\n\nVerify your extensions installer file:\n\nless my_vscode_extesions.sh\n\nInstall your extensions (optional)\nRun your my_vscode_extensions.sh using a Bash command:\nbash my_vscode_extensions.sh\n\n",
"There is an Extension Manager extension, that may help. It seems to allow to install a set of extensions specified in the settings.json.\n",
"Benny's answer on Windows with the Linux subsystem:\ncode --list-extensions | wsl xargs -L 1 echo code --install-extension\n\n",
"\ncode --list-extensions > list\n\nsed -i 's/.*/\\\"&\\\",/' list\n\nCopy contents of file list and add to file .vscode/extensions.json in the \"recommendations\" section.\n\nIf extensions.json doesn't exist then create a file with the following contents\n{\n \"recommendations\": [\n // Add content of file list here\n ]\n}\n\n\nShare the extensions.json file and ask another user to add to the .vscode folder. Visual Studio Code will prompt for installation of extensions.\n\n\n",
"Under windows typically I need to run\ncd C:\\Program Files\\Microsoft VS Code\\bin\ncode.cmd --list-extensions\n\nWhat you don't do is run the code.exe directly under C:\\Program Files\\Microsoft VS Code\\\n",
"Now there's a feature still in preview that allows you to sign in with a Microsoft or GitHub account and have your settings synced without any additional extension. It's pretty simple and straigh-forward. You can learn more here.\nThe things you can sync.\n",
"I opened the Visual Studio Code extensions folder and executed:\nfind * -maxdepth 2 -name \"package.json\" | xargs grep \"name\"\n\nThat gives you a list from which you can extract the extension names.\n",
"If you intend to share workspace extensions configuration across a team, you should look into the Recommended Extensions feature of Visual Studio Code.\nTo generate this file, open the command pallet > Configure Recommended Extensions (Workspace Folder). From there, if you wanted to get all of your current extensions and put them in here, you could use the --list-extensions stuff mentioned in other answers, but add some AWK script to make it paste-able into a JSON array (you can get more or less advanced with this as you please - this is just a quick example):\ncode --list-extensions | awk '{ print \"\\\"\"$0\"\\\"\\,\"}'\nThe advantage of this method is that your team-wide workspace configuration can be checked into source control. With this file present in a project, when the project is opened Visual Studio Code will notify the user that there are recommended extensions to install (if they don't already have them) and can install them all with a single button press.\n",
"For those that are wondering how to copy your extensions from Visual Studio Code to Visual Studio Code insiders, use this modification of Benny's answer:\ncode --list-extensions | xargs -L 1 echo code-insiders --install-extension\n\n",
"If you would like to transfer all the extensions from code to code-insiders or vice versa, here is what worked for me from Git Bash.\ncode --list-extensions | xargs -L 1 code-insiders --install-extension\n\nThis will install all the missing extensions and skip the installed ones. After that, you will need to close and reopen Visual Studio Code.\nSimilarly, you can transfer extensions from code-insiders to code with the following:\ncode-insiders --list-extensions | xargs -L 1 code --install-extension\n\n",
"On Windows, using a powershell script. This returns a \"FriendlyName\" which better reflects the name as it will appear in the Visual Studio Code UI (which code --list-extensions doesn't do):\n<#\n.SYNOPSIS \n Lists installed Visual Studio Code Extensions in a friendly format.\n\n.DESCRIPTION\n Lists installed Visual Studio Code Extensions in a friendly format.\n \n.INPUTS\n None. You cannot pipe objects to this.\n\n.OUTPUTS\n A list of installed Visual Studio Code Extensions in a friendly format.\n\n.EXAMPLE\n PS> .\\List-Extensions.ps1 \n\n.NOTES\n Author: John Bentley\n Version: 2022-07-11 18:55\n Latest at: [How can you export the Visual Studio Code extension list?\n > John Bentley's Answer]\n (https://stackoverflow.com/a/72929878/872154)\n#>\n\n# Change to your directory\n$packages = Get-ChildItem C:/Users/John/.vscode/extensions/*/package.json \n\nFunction Out-JsonItem($File) { \n $json = Get-Content $File | ConvertFrom-Json \n $extensionMetaData = [PSCustomObject]@{\n\n # This is the name that appears in the Visual Studio Code UI, \n # Extensions Tab\n FriendlyName = \n if ($json.displayName) { $json.displayName } else { $json.name }\n \n # Name = $json.name\n PublisherAndName = $json.publisher + '.' + $json.name\n\n Description = $json.description\n }\n return $extensionMetaData\n}\n\n$extensions = ($packages | ForEach-Object { Out-JsonItem -File $_ }) \n\n$extensions.GetEnumerator() | Sort-Object {$_.FriendlyName} \n\n# Alternate sorting (Same order as `code --list-extensions`). \n# $extensions.GetEnumerator() | Sort-Object {$_.PublisherAndName} \n\nExample output:\nFriendlyName PublisherAndName\n------------ ----------------\n[Deprecated] Debugger for Chrome msjsdiag.debugger-for-chrome\n%displayName% ms-vscode-remote.remote-wsl\nApache Conf mrmlnc.vscode-apache\nApache Conf Snippets eiminsasete.apacheconf-snippets\nAuto Close Tag formulahendry.auto-close-tag\nBeautify HookyQR.beautify\nBlank Line Organizer rintoj.blank-line-organizer\nchange-case wmaurer.change-case\nCharacter Count stevensona.character-count\n...\nWord Count ms-vscode.wordcount\nWord Count OliverKovacs.word-count\nXML redhat.vscode-xml\nXML Tools DotJoshJohnson.xml\nXML Tools qub.qub-xml-vscode\nXPath Notebook for Visual Studio Code deltaxml.xpath-notebook\nXSLT/XPath for Visual Studio Code deltaxml.xslt-xpath \n\nInstructions:\n\nCopy script code and save it to a file as List-Extensions.ps1 somewhere (Windows only).\nChange the line $packages = Get-ChildItem C:/Users/John/.vscode/extensions/*/package.json to point to your directory (change the user name).\nOpen Windows Terminal at the directory where you saved List-Extensions.ps1 (the Set-Location command might help here).\nRun the script from Windows Terminal with .\\List-Extensions.ps1\n\n",
"For Linux/Mac only, export the installed Visual Studio Code extensions in the form of an installation script. It's a Z shell (Zsh) script, but it may run in Bash as well.\nhttps://gist.github.com/jvlad/6c92178bbfd1906b7d83c69780ee4630\n",
"\nFile > Preferences > Turn on Settings Sync Will sync your VS Code settings across different devices.\n\n"
] |
[
931,
247,
62,
59,
29,
23,
18,
16,
13,
12,
8,
7,
4,
4,
3,
3,
3,
2,
1,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"visual_studio_code"
] |
stackoverflow_0035773299_visual_studio_code.txt
|
Q:
django.db.utils.IntegrityError: (1062, "Duplicate entry '8d4d1c76950748619f93ee2bfffc7de5' for key 'request_id'")
I don't understand what kind of error this is. Sometimes this code works, but after submitting the form once or twice and then trying to submit it again with different details, I get this error:
django.db.utils.IntegrityError: (1062, "Duplicate entry '8d4d1c76950748619f93ee2bfffc7de5' for key 'request_id'")
Here this is my views.py code
@api_view(['POST', 'GET'])
def add_info_view(request):
if request.method == 'POST':
form = GitInfoForm(request.POST)
if form.is_valid():
form.save()
try:
git_Id = form.cleaned_data['git_Id']
s = Gitinformation.objects.filter(git_Id=git_Id).values('request_id')
print('Value of S :', s[0]['request_id'])
s = s[0]['request_id']
approve_url = f"http://127.0.0.1:8000/Approve/?request_id={str(s)}"
print("Url : ", approve_url)
try:
send_mail(
'KSA Test Activation',
approve_url,
'[email protected]',
['[email protected]'],
fail_silently=False,
)
request.session['approve_url'] = approve_url
print('Approve Url sent : ', approve_url)
except Exception as e:
pass
except Exception as e:
pass
form = GitInfoForm()
form = GitInfoForm()
return render(request, 'requestApp/addInfo.html', {'form': form})
How do I get rid of this error? Please help me.
A:
Based on your comment.
request_id = models.UUIDField(primary_key=False, default=uuid.uuid4().hex, editable=False, unique=True)
You assigned a single UUID instance as the default value; you didn't set a function that generates a new UUID for each record. If you check the related migration file, you can see a line like this:
('request_id', models.UUIDField(default='97c8a76eefe8445081fcfec3af4f1df2', editable=False, unique=True))
You set one UUID instance as the default while the unique constraint is enabled. Because of this, the first record saves fine, but every subsequent record fails with this error.
Instead, you have to pass a function as the default so it is called for each record. Set the field properties like this:
request_id = models.UUIDField(primary_key=False, default=uuid.uuid4, editable=False, unique=True)
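To make the difference concrete, here is a minimal sketch (only the relevant field is shown; the model name is taken from the question). The point is simply that the default must be the callable uuid.uuid4, not the value of a single call:
import uuid
from django.db import models

class Gitinformation(models.Model):
    # Callable default: Django calls uuid.uuid4() for every new row,
    # so each record gets its own value and the unique constraint holds.
    request_id = models.UUIDField(default=uuid.uuid4, editable=False, unique=True)

    # Broken variant from the question: uuid.uuid4().hex runs once at
    # class-definition time, so the same value is baked into the migration,
    # reused for every row, and the second insert hits the unique error.
    # request_id = models.UUIDField(default=uuid.uuid4().hex, editable=False, unique=True)
After changing the field, run makemigrations and migrate again so the stored default becomes the callable rather than a fixed value.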
|
django.db.utils.IntegrityError: (1062, "Duplicate entry '8d4d1c76950748619f93ee2bfffc7de5' for key 'request_id'")
|
I don't understand what kind of error this is. Sometimes this code works, but after submitting the form once or twice and then trying to submit it again with different details, I get this error:
django.db.utils.IntegrityError: (1062, "Duplicate entry '8d4d1c76950748619f93ee2bfffc7de5' for key 'request_id'")
Here this is my views.py code
@api_view(['POST', 'GET'])
def add_info_view(request):
if request.method == 'POST':
form = GitInfoForm(request.POST)
if form.is_valid():
form.save()
try:
git_Id = form.cleaned_data['git_Id']
s = Gitinformation.objects.filter(git_Id=git_Id).values('request_id')
print('Value of S :', s[0]['request_id'])
s = s[0]['request_id']
approve_url = f"http://127.0.0.1:8000/Approve/?request_id={str(s)}"
print("Url : ", approve_url)
try:
send_mail(
'KSA Test Activation',
approve_url,
'[email protected]',
['[email protected]'],
fail_silently=False,
)
request.session['approve_url'] = approve_url
print('Approve Url sent : ', approve_url)
except Exception as e:
pass
except Exception as e:
pass
form = GitInfoForm()
form = GitInfoForm()
return render(request, 'requestApp/addInfo.html', {'form': form})
How do I get rid of this error? Please help me.
|
[
"Based on your comment.\nrequest_id = models.UUIDField(primary_key=False, default=uuid.uuid4().hex, editable=False, unique=True)\n\nYou assigned an instance of the UUID for the default value. In fact, you didn't set a function to generate a new UUID for each record. If you check the related migration file, you can see a line like this:\n('request_id', models.UUIDField(default='97c8a76eefe8445081fcfec3af4f1df2', editable=False, unique=True))\n\nyou set an instance of the UUID as default but the unique property is set, because of this, the first time you save a record is ok but the next record, you face with error.\nActually, you have to set a function for the default to run for each record. You should set the properties like the below:\nrequest_id = models.UUIDField(primary_key=False, default=uuid.uuid4, editable=False, unique=True)\n\n"
] |
[
0
] |
[] |
[] |
[
"django",
"django_rest_framework",
"django_views",
"mysql",
"python"
] |
stackoverflow_0074652528_django_django_rest_framework_django_views_mysql_python.txt
|
Q:
Java Feign Fallback class
I am using SpringBoot version 1.5.9.
I can’t understand why my Fallback class doesn’t work out.
Maybe I'm doing something wrong?
My Feign client:
@FeignClient(
name = "prices",
url = "${prices.url}",
configuration = MyFeignConfig.class,
fallbackFactory = FallbackClass.class
)
public interface PricesFeignClient {
@GetMapping("/{userId}")
PriceModel get(
@PathVariable("userId") String userId
);
}
Here is the fallback class:
@Component
public class FallbackClass implements FallbackFactory<PricesFeignClient> {
@Override
public PricesFeignClient create(Throwable cause) {
return new PricesFeignClient() {
@Override
public PriceModel get(String userId) {
System.out.println("LALALA");
return null;
}
};
}
}
In theory, my fallback method should work out if my Feign client returns an error.
In the Feign client I specified a wrong URL in prices.url (to simulate the situation where the remote service I am calling is unavailable). Given that, my Feign client should return an error and the Fallback class should be called, and I should see the message "LALALA" in the console.
This message is not in the console: my Fallback class is not being called. Instead, I get an error stating that the requested resource was not found.
Please tell me what could be the problem? Can I make a mistake somewhere?
The thing is that right now I'm just trying to get my Fallback class to work. After that I want to call another Feign client with a different URL from inside the Fallback class, so that it is used when my main service is unavailable.
Tell me, please. thanks
A:
I had to add this dependency for it to work (also don't forget to set feign.hystrix.enabled: true, as was said in the comments):
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix-hystrix</artifactId>
<version>2.2.10.RELEASE</version>
</dependency>
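For completeness, a minimal sketch of the matching configuration entry, assuming a YAML configuration file (with application.properties it would be the single line feign.hystrix.enabled=true):
# application.yml
feign:
  hystrix:
    enabled: true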
|
Java Feign Fallback class
|
I am using SpringBoot version 1.5.9.
I can’t understand why my Fallback class doesn’t work out.
Maybe I'm doing something wrong?
My Feign client:
@FeignClient(
name = "prices",
url = "${prices.url}",
configuration = MyFeignConfig.class,
fallbackFactory = FallbackClass.class
)
public interface PricesFeignClient {
@GetMapping("/{userId}")
PriceModel get(
@PathVariable("userId") String userId
);
}
Here is the fallback class:
@Component
public class FallbackClass implements FallbackFactory<PricesFeignClient> {
@Override
public PricesFeignClient create(Throwable cause) {
return new PricesFeignClient() {
@Override
public PriceModel get(String userId) {
System.out.println("LALALA");
return null;
}
};
}
}
In theory, my fallback method should work out if my Feign client returns an error.
In the Feign client I specified a wrong URL in prices.url (to simulate the situation where the remote service I am calling is unavailable). Given that, my Feign client should return an error and the Fallback class should be called, and I should see the message "LALALA" in the console.
This message is not in the console: my Fallback class is not being called. Instead, I get an error stating that the requested resource was not found.
Please tell me what could be the problem? Can I make a mistake somewhere?
The thing is that right now I'm just trying to get my Fallback class to work. After that I want to call another Feign client with a different URL from inside the Fallback class, so that it is used when my main service is unavailable.
Tell me, please. thanks
|
[
"I had to add to dependencies this for it to work (also don't forget to insert feign.hystrix.enabled: true as was said in the comments)\n<dependency>\n <groupId>org.springframework.cloud</groupId>\n <artifactId>spring-cloud-starter-netflix-hystrix</artifactId>\n <version>2.2.10.RELEASE</version>\n</dependency>\n\n"
] |
[
0
] |
[] |
[] |
[
"fallback",
"feign",
"java",
"spring_boot"
] |
stackoverflow_0061129587_fallback_feign_java_spring_boot.txt
|
Q:
How do I use row values as the name of the columns in R?
I have two datasets in R. The first one has one row (obs) and 700 variables (columns).
I have almost 700 ingredients, so it looks like this:
1 2 3 4 .....
Cookbooks ham flour oil ... ......
I would like to make the ingredients the variable names. So that it looks like this (just column names)
Cookbooks ham flour oil ... ......
After that, I would like to merge this data in R with another data that I have that contains this:
Author Pubyear Title
Simon Red 1992 Carbonara
Alex white 1980 roast chicken
........ ..... ............
It goes on. The final result should look like this, with the ingredient columns from dataset 1 attached to the second dataset. This is my first question so I do apologise if I have done something wrong; happy to learn. Thanks everyone
Author PubYear Title ham flour oil ..... ......
Simon Red 1992 Carbonara 0 0 0
Alex white 1980 roast chicken 0 0 1 ..... ......
........ ..... ............
I tried a bunch of codes I found in here but I always get an error message.
I am new to R, any help would be greatly appreciated
A:
I'm not assuming your ingredients table guarantees that ham (for instance) is always in column 2 across all rows. To work with this, I'll expand your sample a little
ingredients <- read.table(text="
1 2 3 4
Cookbooks ham flour oil ...
Cookbooks2 flour egg ... ...")
where dots (here) mean "not included".
library(dplyr)
library(tidyr)
tibble::rownames_to_column(ingredients, "Cookbook") %>%
pivot_longer(-Cookbook) %>%
select(-name) %>%
mutate(z = 1L) %>%
filter(grepl("\\w", value)) %>%
pivot_wider(Cookbook, names_from = "value", values_from = "z", values_fill = list(z = 0))
# # A tibble: 2 x 5
# Cookbook ham flour oil egg
# <chr> <int> <int> <int> <int>
# 1 Cookbooks 1 1 1 0
# 2 Cookbooks2 0 1 0 1
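To then attach these indicator columns to the author/title data, here is a hedged sketch; the object name wide stands for the 0/1 table produced above, and it assumes either that the rows of the two tables are in the same order (bind_cols) or that they share a key (left_join), so adjust the key to whatever actually links your tables:
library(dplyr)

books <- data.frame(
  Author = c("Simon Red", "Alex white"),
  Pubyear = c(1992, 1980),
  Title = c("Carbonara", "roast chicken")
)

# 'wide' is the 0/1 ingredient table built above (one row per cookbook/recipe)
result <- bind_cols(books, select(wide, -Cookbook))

# If both tables share a key instead, a join is safer, e.g.:
# result <- left_join(books, wide, by = c("Title" = "Cookbook"))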
|
How do I use row values as the name of the columns in R?
|
I have two datasets in R. The first one has one row (obs) and 700 variables (columns).
I have almost 700 ingredients, so it looks like this:
1 2 3 4 .....
Cookbooks ham flour oil ... ......
I would like to make the ingredients the variable names. So that it looks like this (just column names)
Cookbooks ham flour oil ... ......
After that, I would like to merge this data in R with another data that I have that contains this:
Author Pubyear Title
Simon Red 1992 Carbonara
Alex white 1980 roast chicken
........ ..... ............
It goes on. The final result should look like this, with the ingredient columns from dataset 1 attached to the second dataset. This is my first question so I do apologise if I have done something wrong; happy to learn. Thanks everyone
Author PubYear Title ham flour oil ..... ......
Simon Red 1992 Carbonara 0 0 0
Alex white 1980 roast chicken 0 0 1 ..... ......
........ ..... ............
I tried a bunch of codes I found in here but I always get an error message.
I am new to R, any help would be greatly appreciated
|
[
"I'm not assuming your ingredients table guarantees that ham (for instance) is always in column 2 across all rows. To work with this, I'll expand your sample a little\ningredients <- read.table(text=\"\n 1 2 3 4\nCookbooks ham flour oil ...\nCookbooks2 flour egg ... ...\")\n\nwhere dots (here) mean \"not included\".\nlibrary(dplyr)\nlibrary(tidyr)\ntibble::rownames_to_column(ingredients, \"Cookbook\") %>%\n pivot_longer(-Cookbook) %>%\n select(-name) %>%\n mutate(z = 1L) %>%\n filter(grepl(\"\\\\w\", value)) %>%\n pivot_wider(Cookbook, names_from = \"value\", values_from = \"z\", values_fill = list(z = 0))\n# # A tibble: 2 x 5\n# Cookbook ham flour oil egg\n# <chr> <int> <int> <int> <int>\n# 1 Cookbooks 1 1 1 0\n# 2 Cookbooks2 0 1 0 1\n\n"
] |
[
0
] |
[] |
[] |
[
"merge",
"r"
] |
stackoverflow_0074658357_merge_r.txt
|
Q:
I would like to understand the logic behind the output
#include<stdio.h>
int main()
{
int value = 0 ;
if(value)
printf("0");
printf("1");
printf("2");
return 0;
}
The output of the above code is 12
but when I tweak the code by adding curly brackets the output differs
#include<stdio.h>
int main()
{
int value = 0 ;
if(value)
{
printf("0\n");
printf("1\n");
printf("2\n");
}
return 0;
}
After adding curly brackets I didn't get an output.
When I change the declared variable to 1, I expected the program to only output the line printf("2"), because when value = 0 it gave 12 as the output, excluding the first printf statement. So I expected that changing the assigned variable to value = 1 would exclude both the first and second printf statements, but it didn't. This made me more confused.
Summary:
If there are no curly brackets {} in the code, it gives a different output than the same code with curly brackets.
When I declare value=1 or any other number, the program prints 012 (in both codes).
I would like to know why is this happening.
Thank you.
A:
When you see
if ( condition )
stuff
it means "if condition is true, do stuff". But you have to be clear on what the condition is, and what the stuff is.
In C, the condition can be any expression, any expression at all. It can be a number like 0, or a variable like value, or a more complicated expression like i > 1 && i < 10. And, super important: the interpretation of the value is that 0 means "false", and anything else — any other value at all — means "true".
And then, what about the stuff? You can either have one statement, or several statements enclosed in curly braces { }.
If you have one statement, the if( condition ) part controls whether you do that one statement or not. If you have several statements enclosed in { }, the if( condition ) part controls whether you do that whole block or not.
So when you said
if(value)
printf("0");
printf("1");
printf("2");
you had one statement — just one — controlled by the if. If value was 0, it would not print 0.
If value was anything other than 0, it would print 0.
And then, it would always print 1 and 2, no matter what.
That's confusing to look at, which is why people always say that proper indentation is important — if you had written it as
if(value)
printf("0");
printf("1");
printf("2");
then the indentation would have accurately suggested the actual control flow.
And then when you said
if(value)
{
printf("0\n");
printf("1\n");
printf("2\n");
}
now all three statements are in a block, so they're all controlled by the if.
If value was 0, it won't print anything.
If value was anything other than 0, it will print all three.
A:
In C an int is evaluated as true if the value is not 0; otherwise it is evaluated as false. Additionally, as mentioned in the comments, an if statement without brackets only applies to the next statement, which is why your output differs there.
A:
The code
if(value)
printf("0");
printf("1");
printf("2");
is interpreted as
if(value)
{
printf("0");
}
printf("1");
printf("2");
printf("0"); will be executed if value is non-zero; printf("1"); and printf("2"); will be executed unconditionally.
In the code
if(value)
{
printf("0\n");
printf("1\n");
printf("2\n");
}
all three printf statements will be executed only if value is non-zero.
|
I would like to understand the logic behind the output
|
#include<stdio.h>
int main()
{
int value = 0 ;
if(value)
printf("0");
printf("1");
printf("2");
return 0;
}
The output of the above code is 12
but when I tweak the code by adding curly brackets the output differs
#include<stdio.h>
int main()
{
int value = 0 ;
if(value)
{
printf("0\n");
printf("1\n");
printf("2\n");
}
return 0;
}
After adding curly brackets I didn't get an output.
When I change the declared variable to 1, I expected the program to only output the line printf("2"), because when value = 0 it gave 12 as the output, excluding the first printf statement. So I expected that changing the assigned variable to value = 1 would exclude both the first and second printf statements, but it didn't. This made me more confused.
Summary:
If there are no curly brackets {} in the code, it gives a different output than the same code with curly brackets.
When I declare value=1 or any other number, the program prints 012 (in both codes).
I would like to know why is this happening.
Thank you.
|
[
"When you see\nif ( condition )\n stuff\n\nit means \"if condition is true, do stuff\". But you have to be clear on what the condition is, and what the stuff is.\nIn C, the condition can be any expression, any expression at all. It can be a number like 0, or a variable like value, or a more complicated expression like i > 1 && i < 10. And, super important: the interpretation of the value is that 0 means \"false\", and anything else — any other value at all — means \"true\".\nAnd then, what about the stuff? You can either have one statement, or several statements enclosed in curly braces { }.\nIf you have one statement, the if( condition ) part controls whether you do that one statement or not. If you have several statements enclosed in { }, the if( condition ) part controls whether you do that whole block or not.\nSo when you said\nif(value)\nprintf(\"0\");\nprintf(\"1\");\nprintf(\"2\");\n\nyou had one statement — just one — controlled by the if. If value was 0, it would not print 0.\nIf value was anything other than 0, it would print 0.\nAnd then, it would always print 2 and 3, no matter what.\nThat's confusing to look at, which is why people always say that proper indentation is important — if you had written it as\nif(value)\n printf(\"0\");\nprintf(\"1\");\nprintf(\"2\");\n\nthen the indentation would have accurately suggested the actual control flow.\nAnd then when you said\nif(value)\n {\n printf(\"0\\n\");\n printf(\"1\\n\");\n printf(\"2\\n\");\n }\n\nnow all three statements are in a block, so they're all controlled by the if.\nIf value was 0, it won't print anything.\nIf value was anything other than 0, it will print all three.\n",
"In c an int can be evaluated true if the value is not 0 otherwise it is ecaluated as false. Additionally as mentioned in the comments, if statements without brakets only apply to the next statement, which is why your output differs there.\n",
"The code\nif(value)\n\nprintf(\"0\");\nprintf(\"1\");\nprintf(\"2\");\n\nis interpreted as\nif(value)\n{\n printf(\"0\");\n}\n\nprintf(\"1\");\nprintf(\"2\");\n\nprintf(\"0\"); will be executed if value is non-zero; printf(\"1\"); and printf(\"2\"); will be executed unconditionally.\nIn the code\nif(value)\n{\nprintf(\"0\\n\");\nprintf(\"1\\n\");\nprintf(\"2\\n\");\n}\n\nall three printf statements will be executed only if value is non-zero.\n"
] |
[
3,
1,
0
] |
[] |
[] |
[
"c",
"curly_braces",
"if_statement"
] |
stackoverflow_0074644851_c_curly_braces_if_statement.txt
|
Q:
Kubernetes ssh into nodes not working in local
How do I ssh to a node inside the cluster locally? I am using the Docker edge version, which has Kubernetes built in. If I run
kubectl ssh node
I am getting
Error: unknown command "ssh" for "kubectl"
Did you mean this?
set
Run 'kubectl --help' for usage.
error: unknown command "ssh" for "kubectl"
Did you mean this?
set
A:
There is no "ssh" command in kubectl yet, but there are plenty of options to access Kubernetes node shell.
In case you are using cloud provider, you are able to connect to nodes directly from instances management interface.
For example, in GCP: Select Menu -> Compute Engine -> VM instances, then press SSH button on the left side of the desired node instance.
In case of using local VM (VMWare, Virtualbox), you can configure sshd before rolling out Kubernetes cluster, or use VM console, which is available from management GUI.
Vagrant provides its own command to access VMs - vagrant ssh
In case of using minikube, there is minikube ssh command to connect to minikube VM. There are also other options.
I found no simple way to access docker-for-desktop VM, but you can easily switch to minikube for experimenting with node settings.
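One workaround that is often suggested for getting a shell on the docker-for-desktop VM is to run a privileged container that enters the VM's namespaces; this is only a sketch and assumes the commonly used justincormack/nsenter1 image is still available:
# drops you into a root shell on the docker-for-desktop VM
docker run -it --rm --privileged --pid=host justincormack/nsenter1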
A:
How to ssh to the node inside the cluster in local
Kubernetes is aware of nodes only at the level of secure communication with the kubelets on the nodes (getting the hostname and IP from the node) and, as such, does not provide cluster-level ssh to nodes out of the box. Depending on your actual provider/setup there are different ways of connecting to nodes, and they all boil down to locating your ssh key, opening the appropriate ports on the firewall/security groups, and issuing ssh -i key user@node_instance_ip to access the node. If you are running locally with virtual machines you can set up your own ssh keypairs and do the trick.
A:
You can effectively shell into a pod using exec (I know it's not exactly what the question asks, but it might be helpful).
An example usage would be kubectl exec -it name-of-your-pod -- /bin/bash. assuming you have bash installed.
Hope that helps.
A:
You have to first Extend kubectl with plugins adding https://github.com/luksa/kubectl-plugins.
Basically, to "install" ssh, e.g.:
wget https://raw.githubusercontent.com/luksa/kubectl-plugins/master/kubectl-ssh
Then make sure the kubectl-ssh file is somewhere on your path.
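For example, on a Unix-like system (assuming /usr/local/bin is on your PATH; kubectl picks up any executable named kubectl-<name> found on PATH as a plugin):
wget https://raw.githubusercontent.com/luksa/kubectl-plugins/master/kubectl-ssh
chmod +x kubectl-ssh
sudo mv kubectl-ssh /usr/local/bin/
# after this, the kubectl ssh subcommand from the question becomes available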
|
Kubernetes ssh into nodes not working in local
|
How do I ssh to a node inside the cluster locally? I am using the Docker edge version, which has Kubernetes built in. If I run
kubectl ssh node
I am getting
Error: unknown command "ssh" for "kubectl"
Did you mean this?
set
Run 'kubectl --help' for usage.
error: unknown command "ssh" for "kubectl"
Did you mean this?
set
|
[
"There is no \"ssh\" command in kubectl yet, but there are plenty of options to access Kubernetes node shell.\nIn case you are using cloud provider, you are able to connect to nodes directly from instances management interface.\nFor example, in GCP: Select Menu -> Compute Engine -> VM instances, then press SSH button on the left side of the desired node instance.\nIn case of using local VM (VMWare, Virtualbox), you can configure sshd before rolling out Kubernetes cluster, or use VM console, which is available from management GUI.\nVagrant provides its own command to access VMs - vagrant ssh\nIn case of using minikube, there is minikube ssh command to connect to minikube VM. There are also other options.\nI found no simple way to access docker-for-desktop VM, but you can easily switch to minikube for experimenting with node settings.\n",
"\nHow to ssh to the node inside the cluster in local\n\n\nKubernetes is aware of nodes on level of secure communication with kubelets on nodes (geting hostname and ip from node), and as such, does not provide cluster-level ssh to nodes out of the box. Depending on your actual provide/setup there are different ways of connecting to nodes and they all boil down to locate your ssh key, open appropriate ports on firewall/security groups and issue ssh -i key user@node_instance_ip command to access node. If you are running locally with virtual machines you can setup your own ssh keypairs and do the trick..\n\n",
"You can effectively shell into a pod using exec(I know its not exactly what the question asks, but might be helpful). \nAn example usage would be kubectl exec -it name-of-your-pod -- /bin/bash. assuming you have bash installed. \nHope that helps.\n",
"You have to first Extend kubectl with plugins adding https://github.com/luksa/kubectl-plugins.\nBasically, to \"install\" ssh, e.g.:\nwget https://raw.githubusercontent.com/luksa/kubectl-plugins/master/kubectl-ssh\n\nThen make sure the file is in kubectl-ssh your path.\n"
] |
[
1,
0,
0,
0
] |
[] |
[] |
[
"kubernetes"
] |
stackoverflow_0050246419_kubernetes.txt
|
Q:
How to use Angular Module Federation to import an entire app
I'm fairly new to Ng Module Federation, but I think I have the gist of it. My issue is that most of the references I've seen either import a single component, or a module that is lazy-loaded in the router. I would like to include an entire mini-app in a page. The general idea is to have iFrame-like behavior without an iFrame.
I have the source app exposing the app.module.ts file and this seems to be working. However, I can't figure out the syntax to import this module and use it as a component within an existing component.
I tried adding loadRemoteModule({...}) to the imports of the module whose component will use the nested "app view". But for one, this is an asynchronous function, and two, I don't know what to do next.
Does anyone know how to import a module and use its components?
A:
I figured it out and will try to summarize here:
Set up webpack for Module Federation for the module you wish to expose "as normal". This is where you will get the exposed URL used below (a rough sketch of the exposing side's config is at the end of this answer). Reference here: https://module-federation.github.io/blog/get-started#:~:text=%3E%20yarn%20dev-,Start,-Federating
In the shell app (the one to consume the federated module):
create the "remotes:" section with a name for the remote module and the URL.
remotes: {
'myMicroFE@https://mywebsite.com/myMicroFE.js'
}
Create the parent component that will house the MicroFE with something like this in the template:
<app-micro-fe-component></app-micro-fe-component>
and similar code in the .ts file:
async ngOnInit() {
const appModule: NgModule = await loadRemoteModule(
{
remoteEntry: 'https://mywebsite.com/myMicroFE.js',
remoteName: 'myMicroFE',
exposedModule: './AppModule',
}
);
const appModuleRef: NgModuleRef<any> = createNgModuleRef(
appModule['AppModule'],
this.injector
);
const microFEComponent = this.vcref.createComponent(
appModuleRef.instance.getComponent()
);
// Sample interaction with the component.
this.renderer.listen('window', 'message', (event) => {
if (event.data && event.data.providerId) {
this.microFEClicked.emit(event.data);
}
});
...
Obviously, names like MyMicroFE are here for readability, you'd use your own names.
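For reference, here is a rough sketch of what the exposing side's webpack config might look like; the names, filename and paths are illustrative, and the exact shape depends on whether you use the @angular-architects/module-federation tooling or a raw ModuleFederationPlugin config:
// webpack.config.js of the micro frontend (remote)
const { ModuleFederationPlugin } = require("webpack").container;

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: "myMicroFE",          // must match remoteName used by the shell
      filename: "myMicroFE.js",   // the remoteEntry file the shell loads
      exposes: {
        "./AppModule": "./src/app/app.module.ts",
      },
      shared: ["@angular/core", "@angular/common", "@angular/router"],
    }),
  ],
};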
|
How to use Angular Module Federation to import an entire app
|
I'm fairly new to Ng Module Federation, but I think I have the gist of it. My issue is that most of the references I've seen either import a single component, or a module that is lazy-loaded in the router. I would like to include an entire mini-app in a page. The general idea is to have iFrame-like behavior without an iFrame.
I have the source app exposing the app.module.ts file and this seems to be working. However, I can't figure out the syntax to import this module and use it as a component within an existing component.
I tried adding loadRemoteModule({...}) to the imports of the module whose component will use the nested "app view". But for one, this is an asynchronous function, and two, I don't know what to do next.
Does anyone know how to import a module and use its components?
|
[
"I figured it out and will try to summarize here:\n\nSet up the webpack for Module Federation for the Module you wish to expose \"as normal\". This is where you will get the exposed URL to use below. Reference here: https://module-federation.github.io/blog/get-started#:~:text=%3E%20yarn%20dev-,Start,-Federating\n\nIn the shell app (the one to consume the federated module):\n\ncreate the \"remotes:\" section with a name for the remote module and the URL.\n remotes: {\n 'myMicroFE@https://mywebsite.com/myMicroFE.js'\n }\n\n\n\n\nCreate the parent component that will house the MicroFE with something like this in the template:\n <app-micro-fe-component></app-micro-fe-component>\n\nand similar code in the .ts file:\n async ngOnInit() {\n const appModule: NgModule = await loadRemoteModule(\n {\n remoteEntry: 'https://mywebsite.com/myMicroFE.js',\n remoteName: 'myMicroFE',\n exposedModule: './AppModule',\n }\n );\n const appModuleRef: NgModuleRef<any> = createNgModuleRef(\n appModule['AppModule'],\n this.injector\n );\n const microFEComponent = this.vcref.createComponent(\n appModuleRef.instance.getComponent()\n );\n// Sample interaction with the component.\nthis.renderer.listen('window', 'message', (event) => {\n if (event.data && event.data.providerId) {\n this.microFEClicked.emit(event.data);\n }\n });\n ...\n\n\n\nObviously, names like MyMicroFE are here for readability, you'd use your own names.\n"
] |
[
1
] |
[] |
[] |
[
"angular",
"angular_module",
"angular_module_federation",
"webpack_module_federation"
] |
stackoverflow_0071505503_angular_angular_module_angular_module_federation_webpack_module_federation.txt
|
Q:
Terraform AWS: SQS destination for Lambda doesn't get added
I have a working AWS project that I'm trying to implement in Terraform.
One of the steps requires a lambda function to query athena and return results to SQS (I am using this module for lambda instead of the original resource). Here is the code:
data "archive_file" "go_package" {
type = "zip"
source_file = "./report_to_SQS_go/main"
output_path = "./report_to_SQS_go/main.zip"
}
resource "aws_sqs_queue" "emails_queue" {
name = "sendEmails_tf"
}
module "lambda_report_to_sqs" {
source = "terraform-aws-modules/lambda/aws"
function_name = "report_to_SQS_Go_tf"
handler = "main"
runtime = "go1.x"
create_package = false
local_existing_package = "./report_to_SQS_go/main.zip"
attach_policy_json = true
policy_json = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect : "Allow"
Action : [
"dynamodb:*",
"lambda:*",
"logs:*",
"athena:*",
"cloudwatch:*",
"s3:*",
"sqs:*"
]
Resource : ["*"]
}
]
})
destination_on_success = aws_sqs_queue.emails_queue.arn
timeout = 200
memory_size = 1024
}
The code works fine and produces the desired output; however, the problem is that SQS doesn't show up as a destination (although the queue shows up in SQS normally and can send/receive messages).
I don't think permissions are the problem because I can add SQS destinations manually from the console successfully.
A:
The variable destination_on_success is only used if you also set create_async_event_config as true. Below is extracted from https://github.com/terraform-aws-modules/terraform-aws-lambda/blob/master
variables.tf
############################
# Lambda Async Event Config
############################
variable "create_async_event_config" {
description = "Controls whether async event configuration for Lambda Function/Alias should be created"
type = bool
default = false
}
variable "create_current_version_async_event_config" {
description = "Whether to allow async event configuration on current version of Lambda Function (this will revoke permissions from previous version because Terraform manages only current resources)"
type = bool
default = true
}
.....
variable "destination_on_failure" {
description = "Amazon Resource Name (ARN) of the destination resource for failed asynchronous invocations"
type = string
default = null
}
variable "destination_on_success" {
description = "Amazon Resource Name (ARN) of the destination resource for successful asynchronous invocations"
type = string
default = null
}
main.tf
resource "aws_lambda_function_event_invoke_config" "this" {
for_each = { for k, v in local.qualifiers : k => v if v != null && local.create && var.create_function && !var.create_layer && var.create_async_event_config }
function_name = aws_lambda_function.this[0].function_name
qualifier = each.key == "current_version" ? aws_lambda_function.this[0].version : null
maximum_event_age_in_seconds = var.maximum_event_age_in_seconds
maximum_retry_attempts = var.maximum_retry_attempts
dynamic "destination_config" {
for_each = var.destination_on_failure != null || var.destination_on_success != null ? [true] : []
content {
dynamic "on_failure" {
for_each = var.destination_on_failure != null ? [true] : []
content {
destination = var.destination_on_failure
}
}
dynamic "on_success" {
for_each = var.destination_on_success != null ? [true] : []
content {
destination = var.destination_on_success
}
}
}
}
}
So destination_on_success is only used in this resource, and this resource is only created if several conditions are met, the key one being that var.create_async_event_config must be true.
You can see the example for this here https://github.com/terraform-aws-modules/terraform-aws-lambda/blob/be6cf9701071bf807cd7864fbcc751ed2552e434/examples/async/main.tf
module "lambda_function" {
source = "../../"
function_name = "${random_pet.this.id}-lambda-async"
handler = "index.lambda_handler"
runtime = "python3.8"
architectures = ["arm64"]
source_path = "${path.module}/../fixtures/python3.8-app1"
create_async_event_config = true
attach_async_event_policy = true
maximum_event_age_in_seconds = 100
maximum_retry_attempts = 1
destination_on_failure = aws_sns_topic.async.arn
destination_on_success = aws_sqs_queue.async.arn
}
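Applied to the module call from the question, the minimal change would look roughly like this (only the async-related arguments are shown; everything else stays as it is):
module "lambda_report_to_sqs" {
  source = "terraform-aws-modules/lambda/aws"
  # ... existing arguments ...

  create_async_event_config = true
  attach_async_event_policy = true   # lets the function publish to the SQS destination
  destination_on_success    = aws_sqs_queue.emails_queue.arn
}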
|
Terraform AWS: SQS destination for Lambda doesn't get added
|
I have a working AWS project that I'm trying to implement in Terraform.
One of the steps requires a lambda function to query athena and return results to SQS (I am using this module for lambda instead of the original resource). Here is the code:
data "archive_file" "go_package" {
type = "zip"
source_file = "./report_to_SQS_go/main"
output_path = "./report_to_SQS_go/main.zip"
}
resource "aws_sqs_queue" "emails_queue" {
name = "sendEmails_tf"
}
module "lambda_report_to_sqs" {
source = "terraform-aws-modules/lambda/aws"
function_name = "report_to_SQS_Go_tf"
handler = "main"
runtime = "go1.x"
create_package = false
local_existing_package = "./report_to_SQS_go/main.zip"
attach_policy_json = true
policy_json = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect : "Allow"
Action : [
"dynamodb:*",
"lambda:*",
"logs:*",
"athena:*",
"cloudwatch:*",
"s3:*",
"sqs:*"
]
Resource : ["*"]
}
]
})
destination_on_success = aws_sqs_queue.emails_queue.arn
timeout = 200
memory_size = 1024
}
The code works fine and produces the desired output; however, the problem is that SQS doesn't show up as a destination (although the queue shows up in SQS normally and can send/receive messages).
I don't think permissions are the problem because I can add SQS destinations manually from the console successfully.
|
[
"The variable destination_on_success is only used if you also set create_async_event_config as true. Below is extracted from https://github.com/terraform-aws-modules/terraform-aws-lambda/blob/master\nvariables.tf\n############################\n# Lambda Async Event Config\n############################\n\nvariable \"create_async_event_config\" {\n description = \"Controls whether async event configuration for Lambda Function/Alias should be created\"\n type = bool\n default = false\n}\n\nvariable \"create_current_version_async_event_config\" {\n description = \"Whether to allow async event configuration on current version of Lambda Function (this will revoke permissions from previous version because Terraform manages only current resources)\"\n type = bool\n default = true\n}\n\n.....\n\nvariable \"destination_on_failure\" {\n description = \"Amazon Resource Name (ARN) of the destination resource for failed asynchronous invocations\"\n type = string\n default = null\n}\n\nvariable \"destination_on_success\" {\n description = \"Amazon Resource Name (ARN) of the destination resource for successful asynchronous invocations\"\n type = string\n default = null\n}\n\nmain.tf\nresource \"aws_lambda_function_event_invoke_config\" \"this\" {\n for_each = { for k, v in local.qualifiers : k => v if v != null && local.create && var.create_function && !var.create_layer && var.create_async_event_config }\n\n function_name = aws_lambda_function.this[0].function_name\n qualifier = each.key == \"current_version\" ? aws_lambda_function.this[0].version : null\n\n maximum_event_age_in_seconds = var.maximum_event_age_in_seconds\n maximum_retry_attempts = var.maximum_retry_attempts\n\n dynamic \"destination_config\" {\n for_each = var.destination_on_failure != null || var.destination_on_success != null ? [true] : []\n content {\n dynamic \"on_failure\" {\n for_each = var.destination_on_failure != null ? [true] : []\n content {\n destination = var.destination_on_failure\n }\n }\n\n dynamic \"on_success\" {\n for_each = var.destination_on_success != null ? [true] : []\n content {\n destination = var.destination_on_success\n }\n }\n }\n }\n}\n\nSo the destination_on_success is only used in this resource and this resources is only invoked if several conditions are met. The key one being var.create_async_event_config must be true.\nYou can see the example for this here https://github.com/terraform-aws-modules/terraform-aws-lambda/blob/be6cf9701071bf807cd7864fbcc751ed2552e434/examples/async/main.tf\nmodule \"lambda_function\" {\n source = \"../../\"\n\n function_name = \"${random_pet.this.id}-lambda-async\"\n handler = \"index.lambda_handler\"\n runtime = \"python3.8\"\n architectures = [\"arm64\"]\n\n source_path = \"${path.module}/../fixtures/python3.8-app1\"\n\n create_async_event_config = true\n attach_async_event_policy = true\n\n maximum_event_age_in_seconds = 100\n maximum_retry_attempts = 1\n\n destination_on_failure = aws_sns_topic.async.arn\n destination_on_success = aws_sqs_queue.async.arn\n}\n\n"
] |
[
1
] |
[] |
[] |
[
"amazon_sqs",
"amazon_web_services",
"aws_lambda",
"terraform",
"terraform_provider_aws"
] |
stackoverflow_0074658144_amazon_sqs_amazon_web_services_aws_lambda_terraform_terraform_provider_aws.txt
|
Q:
Angular pagination with data sent from backend
I need pagination in my table. The backend already tells me how much data there is, but I don't know how to apply that to the paginator.
A screenshot (not included here) marked in red how I should add those values to the table so that the data is loaded.
and in the HTML I don't know how to make it work.
<mat-paginator length="pagLenght" [pageSize]="10"
aria-label="Select page" >
</mat-paginator>
A:
Can you check out ngx-pagination? It's probably the simplest I have used.
Your template
<ul>
<li *ngFor="let item of collection | paginate: { itemsPerPage: 10, currentPage: p }">{{ item }}</li>
</ul>
<pagination-controls (pageChange)="p = $event"></pagination-controls>
Make sure you import it in your App Module and declare the variable "p", which is the page tracker, in your component, roughly as in the sketch below.
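A minimal sketch of that wiring (class and file names are illustrative):
// app.module.ts
import { NgModule } from '@angular/core';
import { NgxPaginationModule } from 'ngx-pagination';

@NgModule({
  imports: [ /* other modules ... */ NgxPaginationModule ],
})
export class AppModule {}

// product-list.component.ts
import { Component } from '@angular/core';

@Component({ selector: 'app-product-list', templateUrl: './product-list.component.html' })
export class ProductListComponent {
  p = 1;             // current page, updated by (pageChange)
  collection = [];   // the data your backend returns
}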
Let me know if it works for you. Cheers
|
Angular pagination with data sent from backend
|
I need pagination in my table. The backend already tells me how much data there is, but I don't know how to apply that to the paginator.
A screenshot (not included here) marked in red how I should add those values to the table so that the data is loaded.
and in the HTML I don't know how to make it work.
<mat-paginator length="pagLenght" [pageSize]="10"
aria-label="Select page" >
</mat-paginator>
|
[
"can you check out ngx-pagination? its probably the simplest i have used.\nYour template\n<ul>\n <li *ngFor=\"let item of collection | paginate: { itemsPerPage: 10, currentPage: p }\">{{ item }}</li>\n </ul>\n\n <pagination-controls (pageChange)=\"p = $event\"></pagination-controls>\n\nMake sure you import in your App Module and declare variable \"p\" which is the page tracker in your component.\nLet me know if it works for you. Cheers\n"
] |
[
0
] |
[] |
[] |
[
"angular",
"html",
"pagination",
"typescript"
] |
stackoverflow_0074642086_angular_html_pagination_typescript.txt
|
Q:
Django3 Paginator with function based view
class ProductList(ListView):
model = Product
paginate_by = 8
def company_page(request, slug):
...
product_list = Product.objects.filter(company=company).order_by('-pk')
paginator = Paginator(product_list, 4)
page_number = request.GET.get('page')
page_obj = paginator.get_page(page_number)
return render(request, 'product/product_list.html', {
...,
'product_list': product_list,
'page_obj': page_obj
})
views.py
<nav aria-label="Pagination">
<ul class="pagination justify-content-center my-5">
{% if page_obj.has_previous %}
<li class="page-item mx-auto lead">
<a class="page-link" href="?page={{page_obj.previous_page_number}}" tabindex="-1" aria-disabled="true">
Newer</a>
</li>
{% else %}
<li class="page-item disabled">
<a class="page-link" href="#" tabindex="-1" aria-disabled="true">
Newer</a>
</li>
{% endif %}
{% if page_obj.has_next %}
<li class="page-item mx-auto lead">
<a class="page-link" href="?page={{page_obj.next_page_number}}">
Older</a>
</li>
{% else %}
<li class="page-item disabled mx-auto lead">
<a class="page-link" href="#!">
Older</a>
</li>
{% endif %}
</ul>
</nav>
product_list.html
I added pagination to the ProductList view with paginate_by and imported Paginator to paginate other pages using a function-based view, but only the ProductList view is paginated; it doesn't work on the company_page view. The Newer & Older buttons work, but the page keeps showing all product_list objects. How can I make it work on all pages?
A:
Try this:
views.py
from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger

# other views ...
def company_page(request, slug):
product_list = Product.objects.filter(company=company).order_by('-pk')
page = request.GET.get('page', 1)
paginator = Paginator(product_list, 4)
try:
Products = paginator.page(page)
except PageNotAnInteger:
Products = paginator.page(1)
except EmptyPage:
Products = paginator.page(paginator.num_pages)
context = {'Products': Products}
return render(request, 'product/product_list.html', context)
product_list.html
<nav aria-label="Pagination">
{% if Products.has_other_pages %}
<ul class="pagination justify-content-center my-5">
{% if Products.has_previous %}
<li class="page-item mx-auto lead">
<a class="page-link" href="?page={{ Products.previous_page_number }}" tabindex="-1" aria-disabled="true">
Previous</a>
</li>
{% else %}
<li class="page-item disabled">
<a class="page-link" href="#" tabindex="-1" aria-disabled="true">
Previous</a>
</li>
{% endif %}
{% for i in Products.paginator.page_range %}
{% if Products.number == i %}
<li class="page-item mx-auto lead">
<a class="page-link" href="?page={{ i }}">
{{ i }}</a>
</li>
{% endif %}
{% endfor %}
{% if Products.has_next %}
<li class="page-item mx-auto lead">
<a class="page-link" href="?page={{ Products.next_page_number }}">
Next</a>
</li>
{% else %}
<li class="page-item disabled mx-auto lead">
<a class="page-link" href="#!">
Next</a>
</li>
{% endif %}
</ul>
{% endif %}
</nav>
|
Django3 Paginator with function based view
|
class ProductList(ListView):
model = Product
paginate_by = 8
def company_page(request, slug):
...
product_list = Product.objects.filter(company=company).order_by('-pk')
paginator = Paginator(product_list, 4)
page_number = request.GET.get('page')
page_obj = paginator.get_page(page_number)
return render(request, 'product/product_list.html', {
...,
'product_list': product_list,
'page_obj': page_obj
})
views.py
<nav aria-label="Pagination">
<ul class="pagination justify-content-center my-5">
{% if page_obj.has_previous %}
<li class="page-item mx-auto lead">
<a class="page-link" href="?page={{page_obj.previous_page_number}}" tabindex="-1" aria-disabled="true">
Newer</a>
</li>
{% else %}
<li class="page-item disabled">
<a class="page-link" href="#" tabindex="-1" aria-disabled="true">
Newer</a>
</li>
{% endif %}
{% if page_obj.has_next %}
<li class="page-item mx-auto lead">
<a class="page-link" href="?page={{page_obj.next_page_number}}">
Older</a>
</li>
{% else %}
<li class="page-item disabled mx-auto lead">
<a class="page-link" href="#!">
Older</a>
</li>
{% endif %}
</ul>
</nav>
product_list.html
I added pagination to the ProductList view with paginate_by and imported Paginator to paginate other pages using a function-based view, but only the ProductList view is paginated; it doesn't work on the company_page view. The Newer & Older buttons work, but the page keeps showing all product_list objects. How can I make it work on all pages?
|
[
"Try this:\nviews.py\n# other views ...\n\ndef company_page(request, slug):\n product_list = Product.objects.filter(company=company).order_by('-pk')\n\n page = request.GET.get('page', 1)\n paginator = Paginator(product_list, 4)\n\n try:\n Products = paginator.page(page)\n except PageNotAnInteger:\n Products = paginator.page(1)\n except EmptyPage:\n Products = paginator.page(paginator.num_pages)\n\n context = {'Products': Products}\n return render(request, 'product/product_list.html', context)\n\nproduct_list.html\n<nav aria-label=\"Pagination\">\n{% if Products.has_other_pages %}\n <ul class=\"pagination justify-content-center my-5\">\n {% if Products.has_previous %}\n <li class=\"page-item mx-auto lead\">\n <a class=\"page-link\" href=\"?page={{ Products.previous_page_number }}\" tabindex=\"-1\" aria-disabled=\"true\">\n Previous</a>\n </li>\n {% else %}\n <li class=\"page-item disabled\">\n <a class=\"page-link\" href=\"#\" tabindex=\"-1\" aria-disabled=\"true\">\n Previous</a>\n </li>\n {% endif %}\n {% for i in Products.paginator.page_range %}\n {% if Products.number == i %}\n <li class=\"page-item mx-auto lead\">\n <a class=\"page-link\" href=\"?page={{ i }}\">\n {{ i }}</a>\n </li>\n {% endif %}\n {% endfor %}\n {% if Products.has_next %}\n\n <li class=\"page-item mx-auto lead\">\n <a class=\"page-link\" href=\"?page={{ Products.next_page_number }}\">\n Next</a>\n </li>\n {% else %}\n <li class=\"page-item disabled mx-auto lead\">\n <a class=\"page-link\" href=\"#!\">\n Next</a>\n </li>\n {% endif %}\n </ul>\n {% endif %}\n\n </nav>\n\n"
] |
[
0
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0074646099_django_python.txt
|
Q:
Prediction with GPU is much slower than with CPU?
Curiously, I just found out that my CPU is much faster for predictions.
Doing inference with the GPU is much slower than with the CPU.
I have a tf.keras (TF2) NN model with a simple dense layer:
input = tf.keras.layers.Input(shape=(100,), dtype='float32')
X = tf.keras.layers.Dense(2)(input)
model = tf.keras.Model(input, X)
# also initialized with weights from a file
weights = np.load("weights.npy", allow_pickle=True)
model.layers[-1].set_weights(weights)
scores = model.predict_on_batch(data)
For 100 samples doing predictions I get:
2 s for GPU
0.07 s for CPU (!)
I am using a simple geforce mx150 with 2gb
I also tried predict_on_batch(x), as someone suggested it is faster than plain predict, but it takes about the same time here.
Refer: Why does keras model predict slower after compile?
Has anyone an idea, what is going on there? What could be an issue possibly?
A:
Using the GPU adds a lot of overhead: data has to be loaded into GPU memory (through the relatively slow PCI bus) and the results copied back.
In order for the GPU to be more efficient than the CPU, the model must be very big, there must be plenty of data, and the algorithms must be able to run fully inside the GPU, without requiring partial results to be moved back to the CPU.
The optimal configuration depends on the quantity of memory and of cores inside your GPU, so you must do some tests, but the following rules apply:
Your NN must have at least >10k parameters, training data set must have at least 10k records. Otherwise your overhead will probably kill the performances of GPU
When you call model.fit (or predict), use a large batch_size (pay attention, the default is only 32), possibly containing your whole dataset, or at least a multiple of 1024. Do some tests to find the optimum for you (see the sketch at the end of this answer).
For some GPUs, it might help to perform computations in float16 instead of float32. Follow this tutorial to see how to activate it.
If your GPU has specific Tensor Cores, in order to use efficiently its hardware, several data must be multiples of 8. In the preceding tutorial, see at the paragraph "Ensuring GPU Tensor Cores are used" what parameters must be changed and how. In general, it's a bad idea to use layers which contain a number of neurons not multiple of 8.
Some type of layers, namely RNNs, have an architecture which cannot be solved directly by the GPU. In this case, data must be moved constantly back and forth to CPU and the speed is lost. If a RNN is really needed, Tensorflow v2 has an implementation of the LSTM layer which is optimized for GPU, but some limitations on the parameters are present: see this thread and the documetation.
If you are training a Reinforcement Learning model, activate an Experience Replay and use a memory buffer for the experience which is at least >10x your batch_size. This way, you will run the NN training only when a big batch of data is ready.
Deactivate as much verbosity as possible
If everything is set up correctly, you should be able to train your model faster with GPU than with CPU.
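As a rough illustration of the batch-size point applied to the prediction case from the question (a sketch, assuming TF 2.x; the optimal batch size depends on your GPU memory):
import numpy as np
import tensorflow as tf

inp = tf.keras.layers.Input(shape=(100,), dtype="float32")
out = tf.keras.layers.Dense(2)(inp)
model = tf.keras.Model(inp, out)

data = np.random.rand(100, 100).astype("float32")

# One large batch means one host->GPU copy and one GPU->host copy,
# instead of paying the PCI-bus transfer overhead for many small batches.
scores = model.predict(data, batch_size=len(data))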
A:
GPU is good if you have compute-intensive tasks (large models) due to the overhead of copying your data and results between the host and GPU. In your case, the model is very small. It means it will take you longer to copy data than to predict. Even if the CPU is slower than the GPU, you don't have to copy the data, so it's ultimately faster.
|
Prediction with GPU is much slower than with CPU?
|
Curiously, I just found out that my CPU is much faster for predictions.
Doing inference with the GPU is much slower than with the CPU.
I have a tf.keras (TF2) NN model with a simple dense layer:
input = tf.keras.layers.Input(shape=(100,), dtype='float32')
X = tf.keras.layers.Dense(2)(input)
model = tf.keras.Model(input, X)
# also initialized with weights from a file
weights = np.load("weights.npy", allow_pickle=True)
model.layers[-1].set_weights(weights)
scores = model.predict_on_batch(data)
For 100 samples doing predictions I get:
2 s for GPU
0.07 s for CPU (!)
I am using a simple geforce mx150 with 2gb
I also tried predict_on_batch(x), as someone suggested it is faster than plain predict, but it takes about the same time here.
Refer: Why does keras model predict slower after compile?
Has anyone an idea, what is going on there? What could be an issue possibly?
|
[
"Using the GPU puts a lot of overhead to load data on the GPU memory (through the relatively slow PCI bus) and to get the results back.\nIn order for the GPU to be more efficient than the CPU, the model must to be very big, have plenty of data and use algorithms that can run fully inside the GPU, without requiring partial results to be moved back to the CPU.\nThe optimal configuration depends on the quantity of memory and of cores inside your GPU, so you must do some tests, but the following rules apply:\n\nYour NN must have at least >10k parameters, training data set must have at least 10k records. Otherwise your overhead will probably kill the performances of GPU\nWhen you model.fit, use a large batch_size (pay attention, the default is only 32), possibly to contain your whole dataset, or at least a multiple of 1024. Do some test to find the optimum for you.\nFor some GPUs, it might help performing computations in float16 instead of float32. Follow this tutorial to see how to activate it.\nIf your GPU has specific Tensor Cores, in order to use efficiently its hardware, several data must be multiples of 8. In the preceding tutorial, see at the paragraph \"Ensuring GPU Tensor Cores are used\" what parameters must be changed and how. In general, it's a bad idea to use layers which contain a number of neurons not multiple of 8.\nSome type of layers, namely RNNs, have an architecture which cannot be solved directly by the GPU. In this case, data must be moved constantly back and forth to CPU and the speed is lost. If a RNN is really needed, Tensorflow v2 has an implementation of the LSTM layer which is optimized for GPU, but some limitations on the parameters are present: see this thread and the documetation.\nIf you are training a Reinforcement Learning, activate an Experience Replay and use a memory buffer for the experience which is at least >10x your batch_size. This way, you will activate the NN training only when a big bunch of data is ready.\nDeactivate as much verbosity as possible\n\nIf everything is set up correctly, you should be able to train your model faster with GPU than with CPU.\n",
"GPU is good if you have compute-intensive tasks (large models) due to the overhead of copying your data and results between the host and GPU. In your case, the model is very small. It means it will take you longer to copy data than to predict. Even if the CPU is slower than the GPU, you don't have to copy the data, so it's ultimately faster.\n"
] |
[
0,
0
] |
[] |
[] |
[
"keras",
"tensorflow"
] |
stackoverflow_0065361820_keras_tensorflow.txt
|
Q:
Count specific days between two dates based on a rolling fortnight
I am creating a scheduled delivery system.
I put together the following script that counts the quantity of specific days of the week between two dates in dd/mm/yyyy format (UK).
var date0 = "01/02/2021";
var date1 = "31/01/2022";
var dayList = [1]; // Monday based on Sunday being 0
var test = countDays(dayList,parseDate(date0),parseDate(date1));
alert(test);
function parseDate(str) {
var dmy = str.split('/');
return new Date(+dmy[2],dmy[1] - 1,+dmy[0]);
}
function countDays(days,fromDate,toDate) {
var ndays = 1 + Math.round((toDate-fromDate)/(24*3600*1000));
var sum = function(a,b) {
return a + Math.floor((ndays+(fromDate.getDay()+6-b)%7)/7);
};
return days.reduce(sum,0);
}
In this particular instance dayList=[1] is 'Monday'. The script counts the number of Mondays between the two dates and we get the result 53 (because it happens that 01/02/2021 falls on a Monday).
This works great if I want to find out how many Mondays there are between two dates based on a rolling week.
How can I modify this to allow for a rolling fortnight?
For example I might want to find out how many weekly Mondays and alternate Fridays there are based on a rolling fortnight, not a rolling week. I have to allow for a mixture of weekly and fortnightly deliveries.
For example:
In this case the client wants a delivery every Monday and an extra delivery every other Friday.
Sat Sun Mon Tue Wed Thu Fri Sat Sun Mon Tue Wed Thu Fri
x x x
Schedules vary enormously. The rolling fortnight starts on the start date provided by the user.
What I am looking for is the total number of days between a start and end date but I am struggling with factoring the 'rolling fortnight' aspect of this.
A:
I think you could get the day names between the dates, and then use a reduce to count how many times each appears, and then use that to calculate how many deliveries there are:
const start = new Date('2022-11-17T03:24:00');
const endDate = new Date('2022-12-31T03:24:00');
const getDaysBetweenDates = function*(s, e) {
let currentDate = s;
while (currentDate < e) {
yield currentDate.toLocaleDateString("en-us", { weekday: 'long' });
currentDate = new Date(currentDate.getTime() + (24 * 60 * 60 * 1000));
}
}
const dayCounts = Array.from(getDaysBetweenDates(start, endDate))
.reduce((p, c) => ({
...p,
[c]: (p.hasOwnProperty(c) ? p[c] + 1 : 1)
}), {});
console.log(dayCounts.Monday + Math.floor(dayCounts.Friday / 2));
A:
This is the solution I came up with. I am sure that this can be expanded further but I have the ability to store an array of the delivery dates and (now that I have the total number of deliveries) I can calculate costs based on other information.
I hope this is of help.
function parseDate(str) {
//handles the dd/mm/yyyy format UK date
var dmy = str.split('/');
return new Date(+dmy[2], dmy[1] - 1, + dmy[0]);
}
var fromDate = parseDate(document.getElementById("startdate").value); // convert the date to js date
var toDate = parseDate(document.getElementById("enddate").value); // convert the date to js date
var thisDay = fromDate.getDay();
var thisWeek;
var thisValue;
var testWeek;
var testElement;
var result = 0;
while (fromDate <= toDate) {
if (thisDay >= 14) {
thisDay = 1;
}
thisWeek = Math.ceil(thisDay / 7);
for (i = 1; i <= 14; i++) {
testElement = document.getElementById("day" + i);
testWeek = Math.ceil(i / 7);
if ((testElement.checked) && (thisDay == i) && (thisWeek == testWeek)) {
result++;
}
}
thisDay++;
fromDate.setDate(fromDate.getDate() + 1);
}
alert("Number of deliveries is: " + result);
<input name="startdate" id="startdate" type="text" value="01/02/2021"/>
<input name="enddate" id="enddate" type="text" value="14/05/2021"/>
<table>
<thead>
<tr>
<th>Mon</th>
<th>Tue</th>
<th>Wed</th>
<th>Thu</th>
<th>Fri</th>
<th>Sat</th>
<th>Sun</th>
<th>Mon</th>
<th>Tue</th>
<th>Wed</th>
<th>Thu</th>
<th>Fri</th>
<th>Sat</th>
<th>Sun</th>
</tr>
</thead>
<tbody>
<tr>
<td><input class="tick checkboxgroup" type="checkbox" name="day1" id="day1" checked="checked"/></td>
<td><input class="tick checkboxgroup" type="checkbox" name="day2" id="day2"/></td>
<td><input class="tick checkboxgroup" type="checkbox" name="day3" id="day3"/></td>
<td><input class="tick checkboxgroup" type="checkbox" name="day4" id="day4"/></td>
<td><input class="tick checkboxgroup" type="checkbox" name="day5" id="day5"/></td>
<td><input class="tick checkboxgroup" type="checkbox" name="day6" id="day6"/></td>
<td><input class="tick checkboxgroup" type="checkbox" name="day7" id="day7"/></td>
<td><input class="tick checkboxgroup" type="checkbox" name="day8" id="day8" checked="checked"/></td>
<td><input class="tick checkboxgroup" type="checkbox" name="day9" id="day9"/></td>
<td><input class="tick checkboxgroup" type="checkbox" name="day10" id="day10"/></td>
<td><input class="tick checkboxgroup" type="checkbox" name="day11" id="day11"/></td>
<td><input class="tick checkboxgroup" type="checkbox" name="day12" id="day12" checked="checked"/></td>
<td><input class="tick checkboxgroup" type="checkbox" name="day13" id="day13"/></td>
<td><input class="tick checkboxgroup" type="checkbox" name="day14" id="day14"/></td>
</tr>
</tbody>
</table>
|
Count specific days between two dates based on a rolling fortnight
|
I am creating a scheduled delivery system.
I put together the following script that counts the quantity of specific days of the week between two dates in dd/mm/yyyy format (UK).
var date0 = "01/02/2021";
var date1 = "31/01/2022";
var dayList = [1]; // Monday based on Sunday being 0
var test = countDays(dayList,parseDate(date0),parseDate(date1));
alert(test);
function parseDate(str) {
var dmy = str.split('/');
return new Date(+dmy[2],dmy[1] - 1,+dmy[0]);
}
function countDays(days,fromDate,toDate) {
var ndays = 1 + Math.round((toDate-fromDate)/(24*3600*1000));
var sum = function(a,b) {
return a + Math.floor((ndays+(fromDate.getDay()+6-b)%7)/7);
};
return days.reduce(sum,0);
}
In this particular instance the dayList=[1] is 'Monday'. The script counts the number of Mondays between the two dates and we get the result 53 (because it happens that 01/02/2021 falls on a Monday).
This works great if I want to find out how many Mondays there were between two dates based on a rolling week.
How can I modify this to allow for a rolling fortnight?
For example I might want to find out how many weekly Mondays and alternate Fridays there are based on a rolling fortnight, not a rolling week. I have to allow for a mixture of weekly and fortnightly deliveries.
For example:
In this case the client wants a delivery every Monday and an extra delivery every other Friday.
Sat Sun Mon Tue Wed Thu Fri Sat Sun Mon Tue Wed Thu Fri
x x x
Schedules vary enormously. The rolling fortnight starts on the start date provided by the user.
What I am looking for is the total number of days between a start and end date but I am struggling with factoring the 'rolling fortnight' aspect of this.
|
[
"I think you could get the day names between the dates, and then use a reduce to count how many times each appears, and then use that to calculate how many deliveries there are:\n\n\nconst start = new Date('2022-11-17T03:24:00');\nconst endDate = new Date('2022-12-31T03:24:00');\n\nconst getDaysBetweenDates = function*(s, e) {\n let currentDate = s;\n \n while (currentDate < e) {\n yield currentDate.toLocaleDateString(\"en-us\", { weekday: 'long' });\n currentDate = new Date(currentDate.getTime() + (24 * 60 * 60 * 1000));\n }\n}\n\n\nconst dayCounts = Array.from(getDaysBetweenDates(start, endDate))\n .reduce((p, c) => ({\n ...p,\n [c]: (p.hasOwnProperty(c) ? p[c] + 1 : 1)\n }), {});\n\nconsole.log(dayCounts.Monday + Math.floor(dayCounts.Friday / 2));\n\n\n\n",
"This is the solution I came up with. I am sure that this can be expanded further but I have the ability to store an array of the delivery dates and (now that I have the total number of deliveries) I can calculate costs based on other information.\nI hope this is of help.\n\n\nfunction parseDate(str) {\n //handles the dd/mm/yyyy format UK date\n var dmy = str.split('/');\n return new Date(+dmy[2], dmy[1] - 1, + dmy[0]); \n} \n\nvar fromDate = parseDate(document.getElementById(\"startdate\").value); // convert the date to js date \nvar toDate = parseDate(document.getElementById(\"enddate\").value); // convert the date to js date \nvar thisDay = fromDate.getDay();\n\nvar thisWeek;\nvar thisValue;\nvar testWeek;\nvar testElement; \nvar result = 0;\n\nwhile (fromDate <= toDate) {\n if (thisDay >= 14) {\n thisDay = 1;\n }\n thisWeek = Math.ceil(thisDay / 7);\n for (i = 1; i <= 14; i++) {\n testElement = document.getElementById(\"day\" + i);\n testWeek = Math.ceil(i / 7);\n if ((testElement.checked) && (thisDay == i) && (thisWeek == testWeek)) {\n result++;\n }\n }\n thisDay++;\n fromDate.setDate(fromDate.getDate() + 1);\n} \nalert(\"Number of deliveries is: \" + result);\n<input name=\"startdate\" id=\"startdate\" type=\"text\" value=\"01/02/2021\"/>\n<input name=\"enddate\" id=\"enddate\" type=\"text\" value=\"14/05/2021\"/>\n<table>\n <thead>\n <tr>\n <th>Mon</th>\n <th>Tue</th>\n <th>Wed</th>\n <th>Thu</th>\n <th>Fri</th>\n <th>Sat</th>\n <th>Sun</th>\n <th>Mon</th>\n <th>Tue</th>\n <th>Wed</th>\n <th>Thu</th>\n <th>Fri</th>\n <th>Sat</th>\n <th>Sun</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <td><input class=\"tick checkboxgroup\" type=\"checkbox\" name=\"day1\" id=\"day1\" checked=\"checked\"/></td>\n <td><input class=\"tick checkboxgroup\" type=\"checkbox\" name=\"day2\" id=\"day2\"/></td>\n <td><input class=\"tick checkboxgroup\" type=\"checkbox\" name=\"day3\" id=\"day3\"/></td>\n <td><input class=\"tick checkboxgroup\" type=\"checkbox\" name=\"day4\" id=\"day4\"/></td>\n <td><input class=\"tick checkboxgroup\" type=\"checkbox\" name=\"day5\" id=\"day5\"/></td>\n <td><input class=\"tick checkboxgroup\" type=\"checkbox\" name=\"day6\" id=\"day6\"/></td>\n <td><input class=\"tick checkboxgroup\" type=\"checkbox\" name=\"day7\" id=\"day7\"/></td>\n <td><input class=\"tick checkboxgroup\" type=\"checkbox\" name=\"day8\" id=\"day8\" checked=\"checked\"/></td>\n <td><input class=\"tick checkboxgroup\" type=\"checkbox\" name=\"day9\" id=\"day9\"/></td>\n <td><input class=\"tick checkboxgroup\" type=\"checkbox\" name=\"day10\" id=\"day10\"/></td>\n <td><input class=\"tick checkboxgroup\" type=\"checkbox\" name=\"day11\" id=\"day11\"/></td>\n <td><input class=\"tick checkboxgroup\" type=\"checkbox\" name=\"day12\" id=\"day12\" checked=\"checked\"/></td>\n <td><input class=\"tick checkboxgroup\" type=\"checkbox\" name=\"day13\" id=\"day13\"/></td>\n <td><input class=\"tick checkboxgroup\" type=\"checkbox\" name=\"day14\" id=\"day14\"/></td>\n </tr>\n </tbody>\n</table>\n\n\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"javascript"
] |
stackoverflow_0074647864_javascript.txt
|
Q:
Kubernetes - Handle cronjobs like crontab
I have a lot of cronjobs I need to set on Kubernetes.
I want a file to manage them all and set them to Kubernetes on deployment. I wish that if I remove a cron from that file it will be removed from Kubernetes too.
Basically, I want to handle the crons like I'm handling them today on the machine (from a cron file that I would deploy). Add, remove and change crons.
I couldn't find a way of doing so. Does someone have an idea?
Library or framework I can use like helm? Or any other solution.
A:
I highly recommend using gitops with argocd as a solution for Kubernetes configuration management. Running crontab in a deployment is a bad idea because it is hard to monitor your job results (cronjob results can be collected by the kube-state-metrics exporter).
The idea is to package your manifests (plain kubernetes manifests, kustomize, helm, etc.) -> push them to git -> argocd makes sure your configuration is deployed correctly
The advantages of gitops include:
centralize your configuration
versioning your configuration
git authentication & authorization
traceable
multi-cluster deployment with argocd
automation deployment & sync
...
Gitops is not difficult and is the modern way to do kubernetes configuration management. Give it a try.
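As a rough sketch of how this looks in practice (the repository URL, path, and namespaces below are illustrative placeholders, not taken from the question), an Argo CD Application can point at a git folder that holds the CronJob manifests:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cronjobs                # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/infra.git   # placeholder git repository
    targetRevision: main
    path: cronjobs              # folder containing the CronJob manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: jobs             # placeholder target namespace
  syncPolicy:
    automated:
      prune: true               # a CronJob removed from git is removed from the cluster
      selfHeal: true
With automated sync and prune enabled, deleting a manifest from the repository also deletes the corresponding CronJob from the cluster, which is the add/remove/change behaviour asked for in the question.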
A:
I used Helm to do so. I built a template to go over all crons, which I inserted as values to the helm template (Very similar to crontab but more structured) - see in the example.
Then, all I need to do is run a helm upgrade with a new cron (values) file and it updates everything accordingly. If I update, remove, or add a cron, everything happens automatically and with versioning. You can also add a namespace to your cronjobs to make them more encapsulated.
Here is a very good and easy-to-understand example I used. And its git repo
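A minimal sketch of that kind of chart might look like the following; the values structure and field names here are illustrative assumptions, not the author's actual template:
# values.yaml - illustrative list of crons (hypothetical structure)
cronjobs:
  - name: cleanup
    schedule: "0 2 * * *"
    image: busybox:1.36
    command: ["sh", "-c", "echo cleanup"]
  - name: report
    schedule: "30 6 * * 1"
    image: busybox:1.36
    command: ["sh", "-c", "echo report"]

# templates/cronjobs.yaml - renders one CronJob per entry in .Values.cronjobs
{{- range .Values.cronjobs }}
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: {{ .name }}
spec:
  schedule: {{ .schedule | quote }}
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: {{ .name }}
              image: {{ .image }}
              command: {{ toJson .command }}
{{- end }}
Running helm upgrade with an edited values file then adds, changes, or removes the corresponding CronJob objects, which matches the add/remove/change behaviour described above.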
|
Kubernetes - Handle cronjobs like crontab
|
I have a lot of cronjobs I need to set on Kubernetes.
I want a file to manage them all and set them to Kubernetes on deployment. I wish that if I remove a cron from that file it will be removed from Kubernetes too.
Basically, I want to handle the crons like I'm handling them today on the machine (from a cron file that I would deploy). Add, remove and change crons.
I couldn't find a way of doing so. Does someone have an idea?
Library or framework I can use like helm? Or any other solution.
|
[
"I highly recommend using gitops with argocd as a solution for Kubernetes configure management. Run crontab in deployment is a bad ideal because it hard to monitor your job result (cronjob job result can be get by kube-state-metrics exporter).\nThe ideal is packaging your manifest (it may be kubernetes manifest, kustomize, helm...etc...) -> put them to git -> argocd makes sure your configure deployed correctly\nThe advantages of gitops are include:\n\ncentralize your configuration\nversioning your configuration\ngit authentication & authorization\ntraceable\nmulti-cluster deployment with argocd\nautomation deployment & sync\n...\n\nGitops is not a difficult and is the mordern way for kubernetes configure management. Let's try\n",
"I used Helm to do so. I built a template to go over all crons, which I inserted as values to the helm template (Very similar to crontab but more structured) - see in the example.\nThen, all I need to do is run a helm upgrade with a new corn (values) file and it updates everything accordingly. If I updated, removed, or added a new corn everything is happening automatically and with versioning. You can also add a namespace to your cronjobs to make it more encapsulated.\nHere is a very good and easy-to-understand example I used. And its git repo\n"
] |
[
0,
0
] |
[] |
[] |
[
"kubernetes",
"kubernetes_cronjob"
] |
stackoverflow_0074636071_kubernetes_kubernetes_cronjob.txt
|
Q:
How to return a variable from function that gets variable from another function in JavaScript?
I have multiple functions that pass data from one to another. I moved one of the functions to the backend and receive data as API with Axios. Now I cannot manage to assign data from Axios to some local variable.
Simple code would be like:
function function1()
{
axios({get, url})
.then(response => {
globalVariable = response.data;
function2(globalVariable);
}
function function2(globalVariable)
{
const local = globalVariable;
return local;
}
And then inside of function3, I want to do:
function function3()
{
const from_local = function2()
from_local
}
When I try this I receive undefined result. Please help.
A:
It looks like you are looking for some sort of piping of asynchronous operations. By piping I mean the result of one function execution will be fed into the next.
So basically function1 is mimicking an axios operation here.
// making an api call
function function1() {
return fetch('https://jsonplaceholder.typicode.com/todos/1').then((d) =>
d.json()
);
}
// some random function
function function2(data) {
console.log(' Calling function 2 ');
return data?.title;
}
// some random function
function function3(data) {
console.log(' Calling function 3 ');
return `Hello ${data}`;
}
/** a function to resolve functions sequentially. The result of first
function will be input to another function.
Here ...fns is creating an array like object
so array operations can be performed here **/
const runAsynFunctions = (...fns) => {
return (args) => {
return fns.reduce((acc, curr) => {
return acc.then(curr);
}, Promise.resolve(args));
};
};
// calling runAsynFunctions with and passing list of
// functions which need to resolved sequentially
const doOperation = runAsynFunctions(function2, function3);
// resolving the api call first and the passing the result to
// other functions
function1().then(async(response) => {
const res = await doOperation(response);
console.log(res);
});
A:
This is what promises are for. No need for globals or jumping through hoops to get the data out. Just remember to await any function that's async, (like axios) and annotate any function that contains an "await" as being async.
// note "async" because it contains await
async function backend() {
// note await because axios is async
const response = await axios({get, url});
return response.data;
}
// same thing up the calling chain
async function middleend() {
const local = await backend();
return local;
}
async function frontend() {
const local = await middleend();
console.log('ta da! here\'s the data', local);
}
|
How to return a variable from function that gets variable from another function in JavaScript?
|
I have multiple functions that pass data from one to another. I moved one of the functions to the backend and receive data as API with Axios. Now I cannot manage to assign data from Axios to some local variable.
Simple code would be like:
function function1()
{
axios({get, url})
.then(response => {
globalVariable = response.data;
function2(globalVariable);
}
function function2(globalVariable)
{
const local = globalVariable;
return local;
}
And then inside of function3, I want to do:
function function3()
{
const from_local = function2()
from_local
}
When I try this I receive undefined result. Please help.
|
[
"It looks like you are looking for some sort of piping asynchronous operation. By piping I mean result of one function execution will be feed to another.\nSo basically function1 which is mimicking a axios operation here.\n\n\n// making an api call\nfunction function1() {\n return fetch('https://jsonplaceholder.typicode.com/todos/1').then((d) =>\n d.json()\n );\n}\n// some random function\nfunction function2(data) {\n console.log(' Calling function 2 ');\n return data?.title;\n}\n// some random function\nfunction function3(data) {\n console.log(' Calling function 3 ');\n return `Hello ${data}`;\n}\n/** a function to resolve functions sequentially. The result of first \n function will be input to another function. \n Here ...fns is creating an array like object \n so array operations can be performed here **/\n\nconst runAsynFunctions = (...fns) => {\n return (args) => {\n return fns.reduce((acc, curr) => {\n return acc.then(curr);\n }, Promise.resolve(args)); \n };\n};\n// calling runAsynFunctions with and passing list of \n// functions which need to resolved sequentially \n\nconst doOperation = runAsynFunctions(function2, function3);\n\n// resolving the api call first and the passing the result to \n// other functions\n\nfunction1().then(async(response) => {\n const res = await doOperation(response);\n console.log(res);\n});\n\n\n\n",
"This is what promises are for. No need for globals or jumping through hoops to get the data out. Just remember to await any function that's async, (like axios) and annotate any function that contains an \"await\" as being async.\n// note \"async\" because it contains await\nasync function backend() {\n // note await because axios is async\n const response = await axios({get, url});\n return response.data;\n}\n\n// same thing up the calling chain\nasync function middleend() {\n const local = await backend();\n return local;\n}\n\nasync function frontend() {\n const local = await middleend();\n console.log('ta da! here\\'s the data', local);\n}\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"axios",
"javascript"
] |
stackoverflow_0074651340_axios_javascript.txt
|
Q:
.NET 6 Requests taking too long to reach server
I have a .NET 6 WebAPI that was working perfectly fine until last week in my local environment.
Now when I try to make any request from any source in my localhost, the requests take between 1 and 4 minutes to reach the controller of the WebAPI. When I deploy it to Azure it works perfectly fine, and no other dev seems to be having the same problem.
I'm not sure if this is a known bug of IIS or what's happening, but it looks like nobody else is having this issue, I suspect it has something to do with the connection pooling but I'm not sure how to manage it.
Thanks for the help!
A:
After trying a lot of things, I decided to delete the whole project/repo from my PC and clone it again, and this solved it for me... I know it's not a solution but it stopped happening.
|
.NET 6 Requests taking too long to reach server
|
I have a .NET 6 WebAPI that was working perfectly fine until last week in my local environment.
Now when I try to make any request from any source in my localhost, the requests take between 1 and 4 minutes to reach the controller of the WebAPI. When I deploy it to Azure it works perfectly fine, and no other dev seems to be having the same problem.
I'm not sure if this is a known bug of IIS or what's happening, but it looks like nobody else is having this issue, I suspect it has something to do with the connection pooling but I'm not sure how to manage it.
Thanks for the help!
|
[
"After trying a lot of things, decided to delete the whole project/repo from my pc and cloned it again and this solved it for me... I know it's not a solution but it stopped happening.\n"
] |
[
0
] |
[] |
[] |
[
".net",
".net_6.0",
"iis",
"iis_express"
] |
stackoverflow_0074655465_.net_.net_6.0_iis_iis_express.txt
|
Q:
write a function that generates a list of last days of each month for the past n months from current date
I am trying to create a list of the last days of each month for the past n months from the current date but not including current month
I tried different approaches:
def last_n_month_end(n_months):
"""
Returns a list of the last n month end dates
"""
return [datetime.date.today().replace(day=1) - datetime.timedelta(days=1) - datetime.timedelta(days=30*i) for i in range(n_months)]
Somehow this only partly works, since it assumes every month has 30 days, and it also does not work in Databricks PySpark. It returns AttributeError: 'method_descriptor' object has no attribute 'today'
I also tried the approach mentioned in Generate a sequence of the last days of all previous N months with a given month
def previous_month_ends(date, months):
year, month, day = [int(x) for x in date.split('-')]
d = datetime.date(year, month, day)
t = datetime.timedelta(1)
s = datetime.date(year, month, 1)
return [(x - t).strftime('%Y-%m-%d')
for m in range(months - 1, -1, -1)
for x in (datetime.date(s.year, s.month - m, s.day) if s.month > m else \
datetime.date(s.year - 1, s.month - (m - 12), s.day),)]
but I am not getting it correctly.
I also tried:
df = spark.createDataFrame([(1,)],['id'])
days = df.withColumn('last_dates', explode(expr('sequence(last_day(add_months(current_date(),-3)), last_day(add_months(current_date(), -1)), interval 1 month)')))
I got the last three months (Sep, Oct, Nov), but all of them are the 30th, even though October ends on the 31st. However, it gives me the correct last days when I put more than 3.
What I am trying to get is this:
(last days of the last 4 months not including last_day of current_date)
daterange = ['2022-08-31','2022-09-30','2022-10-31','2022-11-30']
A:
Not sure if this is the best or optimal way to do it, but this does it...
Requires the following package, since datetime does not seem to have any way to subtract months (as far as I know) without hardcoding the number of days or weeks. Not sure, so don't quote me on this....
Package Installation:
pip install python-dateutil
Edit: There was a misunderstanding from my end. I had assumed that all dates were required and not just the month ends. Anyways hope the updated code might help. Still not the most optimal, but easy to understand I guess..
# import datetime package
from datetime import date, timedelta
from dateutil.relativedelta import relativedelta
def previous_month_ends(months_to_subtract):
# get first day of current month
first_day_of_current_month = date.today().replace(day=1)
print(f"First Day of Current Month: {first_day_of_current_month}")
# Calculate and previous month's Last date
date_range_list = [first_day_of_current_month - relativedelta(days=1)]
cur_iter = 1
while cur_iter < months_to_subtract:
# Calculate First Day of previous months relative to first day of current month
cur_iter_fdom = first_day_of_current_month - relativedelta(months=cur_iter)
# Subtract one day to get the last day of previous month
cur_iter_ldom = cur_iter_fdom - relativedelta(days=1)
# Append to the list
date_range_list.append(cur_iter_ldom)
# Increment Counter
cur_iter+=1
return date_range_list
print(previous_month_ends(3))
Function to calculate date list between 2 dates:
Calculate the first of current month.
Calculate start and end dates and then loop through them to get the list of dates.
I have ignored the date argument, since I have assumed that it will be for current date. alternatively it can be added following your own code which should work perfectly.
# import datetime package
from datetime import date, timedelta
from dateutil.relativedelta import relativedelta
def gen_date_list(months_to_subtract):
# get first day of current month
first_day_of_current_month = date.today().replace(day=1)
print(f"First Day of Current Month: {first_day_of_current_month}")
start_date = first_day_of_current_month - relativedelta(months=months_to_subtract)
end_date = first_day_of_current_month - relativedelta(days=1)
print(f"Start Date: {start_date}")
print(f"End Date: {end_date}")
date_range_list = [start_date]
cur_iter_date = start_date
while cur_iter_date < end_date:
cur_iter_date += timedelta(days=1)
date_range_list.append(cur_iter_date)
# print(date_range_list)
return date_range_list
print(gen_date_list(3))
Hope it helps...Edits/Comments are welcome - I am learning myself...
A:
from datetime import datetime, timedelta
def get_last_dates(n_months):
'''
generates a list of lastdates for each month for the past n months
Param:
n_months = number of months back
'''
last_dates = [] # initiate an empty list
for i in range(n_months):
last_dates.append((datetime.today() - timedelta(days=i*30)).replace(day=1) - timedelta(days=1))
return last_dates
This should give you a more accurate last_days
A:
I just thought of a workaround I can use since my last code works:
df = spark.createDataFrame([(1,)],['id'])
days = df.withColumn('last_dates', explode(expr('sequence(last_day(add_months(current_date(),-3)), last_day(add_months(current_date(), -1)), interval 1 month)')))
is to enter -4 and just remove the last_date that I do not need with days.pop(0); that should give me the list of needed last_dates.
|
write a function that generates a list of last days of each month for the past n months from current date
|
I am trying to create a list of the last days of each month for the past n months from the current date but not including current month
I tried different approaches:
def last_n_month_end(n_months):
"""
Returns a list of the last n month end dates
"""
return [datetime.date.today().replace(day=1) - datetime.timedelta(days=1) - datetime.timedelta(days=30*i) for i in range(n_months)]
Somehow this only partly works, since it assumes every month has 30 days, and it also does not work in Databricks PySpark. It returns AttributeError: 'method_descriptor' object has no attribute 'today'
I also tried the approach mentioned in Generate a sequence of the last days of all previous N months with a given month
def previous_month_ends(date, months):
year, month, day = [int(x) for x in date.split('-')]
d = datetime.date(year, month, day)
t = datetime.timedelta(1)
s = datetime.date(year, month, 1)
return [(x - t).strftime('%Y-%m-%d')
for m in range(months - 1, -1, -1)
for x in (datetime.date(s.year, s.month - m, s.day) if s.month > m else \
datetime.date(s.year - 1, s.month - (m - 12), s.day),)]
but I am not getting it correctly.
I also tried:
df = spark.createDataFrame([(1,)],['id'])
days = df.withColumn('last_dates', explode(expr('sequence(last_day(add_months(current_date(),-3)), last_day(add_months(current_date(), -1)), interval 1 month)')))
I got the last three months (Sep, Oct, Nov), but all of them are the 30th, even though October ends on the 31st. However, it gives me the correct last days when I put more than 3.
What I am trying to get is this:
(last days of the last 4 months not including last_day of current_date)
daterange = ['2022-08-31','2022-09-30','2022-10-31','2022-11-30']
|
[
"Not sure if this is the best or optimal way to do it, but this does it...\nRequires the following package since datetime does not seem to have anyway to subtract months as far as I know without hardcoding the number of days or weeks. Not sure, so don't quote me on this....\nPackage Installation:\npip install python-dateutil\n\nEdit: There was a misunderstanding from my end. I had assumed that all dates were required and not just the month ends. Anyways hope the updated code might help. Still not the most optimal, but easy to understand I guess..\n# import datetime package\nfrom datetime import date, timedelta\nfrom dateutil.relativedelta import relativedelta\n\n\ndef previous_month_ends(months_to_subtract):\n # get first day of current month\n first_day_of_current_month = date.today().replace(day=1)\n print(f\"First Day of Current Month: {first_day_of_current_month}\")\n # Calculate and previous month's Last date\n date_range_list = [first_day_of_current_month - relativedelta(days=1)]\n cur_iter = 1\n while cur_iter < months_to_subtract:\n # Calculate First Day of previous months relative to first day of current month\n cur_iter_fdom = first_day_of_current_month - relativedelta(months=cur_iter)\n # Subtract one day to get the last day of previous month\n cur_iter_ldom = cur_iter_fdom - relativedelta(days=1)\n # Append to the list\n date_range_list.append(cur_iter_ldom)\n # Increment Counter\n cur_iter+=1\n return date_range_list\n\nprint(previous_month_ends(3))\n\nFunction to calculate date list between 2 dates:\n\nCalculate the first of current month.\nCalculate start and end dates and then loop through them to get the list of dates.\nI have ignored the date argument, since I have assumed that it will be for current date. alternatively it can be added following your own code which should work perfectly.\n\n# import datetime package\nfrom datetime import date, timedelta\nfrom dateutil.relativedelta import relativedelta\n\n\ndef gen_date_list(months_to_subtract):\n # get first day of current month\n first_day_of_current_month = date.today().replace(day=1)\n print(f\"First Day of Current Month: {first_day_of_current_month}\")\n start_date = first_day_of_current_month - relativedelta(months=months_to_subtract)\n end_date = first_day_of_current_month - relativedelta(days=1)\n print(f\"Start Date: {start_date}\")\n print(f\"End Date: {end_date}\")\n date_range_list = [start_date]\n cur_iter_date = start_date\n while cur_iter_date < end_date:\n cur_iter_date += timedelta(days=1)\n date_range_list.append(cur_iter_date)\n # print(date_range_list)\n return date_range_list\n\nprint(gen_date_list(3))\n\nHope it helps...Edits/Comments are welcome - I am learning myself...\n",
"from datetime import datetime, timedelta\n\ndef get_last_dates(n_months):\n '''\n generates a list of lastdates for each month for the past n months\n Param:\n n_months = number of months back\n '''\n last_dates = [] # initiate an empty list\n for i in range(n_months):\n last_dates.append((datetime.today() - timedelta(days=i*30)).replace(day=1) - timedelta(days=1))\n return last_dates\n\nThis should give you a more accurate last_days\n",
"I just thought a work around I can use since my last codes work:\ndf = spark.createDataFrame([(1,)],['id'])\n\ndays = df.withColumn('last_dates', explode(expr('sequence(last_day(add_months(current_date(),-3)), last_day(add_months(current_date(), -1)), interval 1 month)')))\n\nis to enter -4 and just remove the last_date that I do not need days.pop(0) that should give me the list of needed last_dates.\n"
] |
[
2,
1,
1
] |
[] |
[] |
[
"databricks",
"date",
"pyspark"
] |
stackoverflow_0074649080_databricks_date_pyspark.txt
|
Q:
Ace editor - how do I remove/reset indent?
I want to change the behavior of the editor such that when the user presses enter on an empty list bullet, their cursor position is reset to the start of the line (rather than leaving them at the indented amount).
I've tried:
aceEdit.moveCursorTo(rowToUpdate, 0)
aceEdit.getSession().indentRows(rowToUpdate, rowToUpdate, "")
aceEdit.getSession().replace(range(rowToUpdate, 0, rowToUpdate, 0), "")
However, all three of these leave the cursor at the previous indent level. How do I reset the indent level for the line?
Update: adding example.
* list
* list
* list
* <- user presses enter here
_
Cursor is where I placed the underscore above, and can't be reset programmatically to the start of the line using what I listed above. (User can backspace the indents to get back to the start.)
A:
You can use editor.setOption("enableAutoIndent", false) to disable automatic indentation.
If you want a way to keep autoIndent in all cases except the list, try creating an issue on ace's github page
If you want to remove the indentation on particular line you can do
var Range = ace.Range
var cursor = editor.getCursorPosition()
var line = editor.session.getLine(cursor.row)
var match = line.match(/^\s*\*\s*/)
if (match) {
    editor.session.replace(new Range(cursor.row, 0, cursor.row, cursor.column), "")
}
A:
If you are making a custom mode, you can give it a custom $outdent and related methods - see MatchingBraceOutdent, for example. I think the full set of indentation-related Mode methods is:
getNextLineIndent
checkOutdent
autoOutdent
|
Ace editor - how do I remove/reset indent?
|
I want to change the behavior of the editor such that when the user presses enter on an empty list bullet, their cursor position is reset to the start of the line (rather than leaving them at the indented amount).
I've tried:
aceEdit.moveCursorTo(rowToUpdate, 0)
aceEdit.getSession().indentRows(rowToUpdate, rowToUpdate, "")
aceEdit.getSession().replace(range(rowToUpdate, 0, rowToUpdate, 0), "")
However, all three of these leave the cursor at the previous indent level. How do I reset the indent level for the line?
Update: adding example.
* list
* list
* list
* <- user presses enter here
_
Cursor is where I placed the underscore above, and can't be reset programmatically to the start of the line using what I listed above. (User can backspace the indents to get back to the start.)
|
[
"You can use editor.setOption(\"enableAutoIndent\", false) to disable automatic indentation.\nIf you want a way to keep autoIndent in all cases except the list, try creating an issue on ace's github page\nIf you want to remove the indentation on particular line you can do\nvar range = ace.Range\nvar cursor = editor.getCursorPosition()\nvar line = editor.session.getLine(cursor.row)\nvar match = /^\\s*\\*\\s*/\nif (match) {\n editor.session.replace(new Range(cursor.row, 0, cursor.row, cursor.column), \"\")\n}\n\n",
"If you are making a custom mode, you can give it a custom $outdent and related methods - see MatchingBraceOutdent, for example. I think the full set of indentation-related Mode methods is:\n\ngetNextLineIndent\ncheckOutdent\nautoOutdent\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"ace_editor"
] |
stackoverflow_0074460067_ace_editor.txt
|
Q:
Create procedure COPY INTO statement with column names from external storage
I would like to use the COPY INTO statement to copy a table with its column names from external storage.
I would like to achieve this using a generic procedure which can be used for different tables.
Here below you find a draft. The input parameters could be temp_file_path, schema, table name and list of column names of destination table.
Variable temp_file_path is location of files in Azure data lake(dl).
The variable on column list is a placeholder. Need to know how we can implement it.
Other suggestions are welcome.
CREATE PROC [copy_into_sql_from_dl_columns]
@temp_file_path [VARCHAR](4096)
, @dest_schema [VARCHAR](255)
, @dest_table [VARCHAR](255)
, @dest_columns [VARCHAR](255)
AS
IF @temp_file_path IS NULL OR @dest_schema IS NULL OR @dest_table IS NULL OR @dest_columns IS NULL
BEGIN
PRINT 'ERROR: You must specify temp_file_path, destination schema, table and column names.'
END
ELSE
BEGIN
DECLARE @dest_temp_table AS VARCHAR(4096)
SET @dest_temp_table = '['+@dest_schema + '].[' + @dest_table + ']'
-- set target column names into a target temp variable @dest_temp_columns_list
DECLARE @copy_into_query AS VARCHAR(8000)
SET @copy_into_query = 'COPY INTO ' + @dest_temp_table + ' ('+ @dest_temp_columns +')'+' FROM ''' + @temp_file_path + ''' WITH (FILE_TYPE = ''parquet'', AUTO_CREATE_TABLE = ''OFF'' ) ';
EXEC (@copy_into_query)
END
GO
Environment is Azure cloud synapse
DB is SQL dedicated pool (Azure Synapse Analytics)
A:
Can you use SELECT INTO?
SELECT *
INTO newtable [IN externaldb]
FROM oldtable
WHERE condition;
|
Create procedure COPY INTO statement with column names from external storage
|
I would like to use the COPY INTO statement to copy a table with its column names from external storage.
I would like to achieve this using a generic procedure which can be used for different tables.
Here below you find a draft. The input parameters could be temp_file_path, schema, table name and list of column names of destination table.
Variable temp_file_path is location of files in Azure data lake(dl).
The variable on column list is a placeholder. Need to know how we can implement it.
Other suggestions are welcome.
CREATE PROC [copy_into_sql_from_dl_columns]
@temp_file_path [VARCHAR](4096)
, @dest_schema [VARCHAR](255)
, @dest_table [VARCHAR](255)
, @dest_columns [VARCHAR](255)
AS
IF @temp_file_path IS NULL OR @dest_schema IS NULL OR @dest_table IS NULL OR @dest_columns IS NULL
BEGIN
PRINT 'ERROR: You must specify temp_file_path, destination schema, table and column names.'
END
ELSE
BEGIN
DECLARE @dest_temp_table AS VARCHAR(4096)
SET @dest_temp_table = '['+@dest_schema + '].[' + @dest_table + ']'
-- set target column names into a target temp variable @dest_temp_columns_list
DECLARE @copy_into_query AS VARCHAR(8000)
SET @copy_into_query = 'COPY INTO ' + @dest_temp_table + ' ('+ @dest_temp_columns +')'+' FROM ''' + @temp_file_path + ''' WITH (FILE_TYPE = ''parquet'', AUTO_CREATE_TABLE = ''OFF'' ) ';
EXEC (@copy_into_query)
END
GO
Environment is Azure cloud synapse
DB is SQL dedicated pool (Azure Synapse Analytics)
|
[
"Can you use SELECT INTO?\nSELECT *\nINTO newtable [IN externaldb]\nFROM oldtable\nWHERE condition;\n\n"
] |
[
0
] |
[] |
[] |
[
"azure",
"azure_synapse",
"sql_server"
] |
stackoverflow_0074647953_azure_azure_synapse_sql_server.txt
|
Q:
Button is not clickable by Selenuim (Python)
I have a script that uses Selenium (Python).
I tried to make the code click a button that it acknowledges is clickable, but it throws an error stating it's not clickable.
Same thing happens again in a dropdown menu, but this time I'm not clicking, but selecting an option by value.
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.by import By
from selenium.webdriver.support.select import *
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# Getting to Chrome and website
website = 'https://www.padron.gob.ar/publica/'
driver = webdriver.Chrome(ChromeDriverManager().install())
driver.get(website)
driver.maximize_window()
#"BUENOS AIRES" in "Distrito Electoral"
distritoElectoralOptions = driver.find_element(By.NAME, 'site')
Select(distritoElectoralOptions).select_by_value('02 ')
#Clicking "Consulta por Zona
WebDriverWait(driver, 35).until(EC.element_to_be_clickable((By.ID, 'lired')))
consultaPorZona = driver.find_element(By.ID, 'lired')
consultaPorZona.click()
#"SEC_8" in "Sección General Electoral"
WebDriverWait(driver, 35).until(EC.visibility_of((By.NAME, 'secg')))
seccionGeneralElectoral = driver.find_element(By.NAME, 'secg')
Select(seccionGeneralElectoral).select_by_value('00008')
I'm getting this error on line 21:
selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable: element has zero size
It works in a ipython notebook if each section is separated, but only if the option "Run all" is not used. Instead, the kernel has to be run on it's own.
I'm using VS Code.
Also, when it reaches the last line, when run in ipynb format, it throws this error:
Message: Cannot locate option with value: 00008
Thank you in advance.
A:
When a web element is present in the HTML DOM but is not in a state that can be interacted with, in other words when the element is found but we can't interact with it, Selenium throws ElementNotInteractableException.
The element not interactable exception may occur due to various reasons.
Element is not visible
Element is present off-screen (After scrolling down it will display)
Element is present behind any other element
Element is disabled
If the element is not visible then wait until element is visible. For this we will use wait command in selenium
wait = WebDriverWait(driver, 10)
element = wait.until(EC.element_to_be_clickable((By.ID, 'someid')))
element.click()
If the element is off-screen then we need to scroll down the browser and interact with the element.
Use the execute_script() interface that helps to execute JavaScript methods through Selenium Webdriver.
browser = webdriver.Firefox()
browser.get("https://en.wikipedia.org")
browser.execute_script("window.scrollTo(0,1000)")  # scroll 1000 pixels vertically
Refer to this solution for the problem in question.
|
Button is not clickable by Selenuim (Python)
|
I have a script that uses Selenium (Python).
I tried to make the code click a button that it acknowledges is clickable, but it throws an error stating it's not clickable.
Same thing happens again in a dropdown menu, but this time I'm not clicking, but selecting an option by value.
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.by import By
from selenium.webdriver.support.select import *
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# Getting to Chrome and website
website = 'https://www.padron.gob.ar/publica/'
driver = webdriver.Chrome(ChromeDriverManager().install())
driver.get(website)
driver.maximize_window()
#"BUENOS AIRES" in "Distrito Electoral"
distritoElectoralOptions = driver.find_element(By.NAME, 'site')
Select(distritoElectoralOptions).select_by_value('02 ')
#Clicking "Consulta por Zona
WebDriverWait(driver, 35).until(EC.element_to_be_clickable((By.ID, 'lired')))
consultaPorZona = driver.find_element(By.ID, 'lired')
consultaPorZona.click()
#"SEC_8" in "Sección General Electoral"
WebDriverWait(driver, 35).until(EC.visibility_of((By.NAME, 'secg')))
seccionGeneralElectoral = driver.find_element(By.NAME, 'secg')
Select(seccionGeneralElectoral).select_by_value('00008')
I'm getting this error on line 21:
selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable: element has zero size
It works in a ipython notebook if each section is separated, but only if the option "Run all" is not used. Instead, the kernel has to be run on it's own.
I'm using VS Code.
Also, when it reaches the last line, when run in ipynb format, it throws this error:
Message: Cannot locate option with value: 00008
Thank you in advance.
|
[
"When a web element is present in the HTML-DOM but it is not in the state that can be interacted. Other words, when the element is found but we can’t interact with it, it throws ElementNotInteractableException.\nThe element not interactable exception may occur due to various reasons.\n\nElement is not visible\nElement is present off-screen (After scrolling down it will display)\nElement is present behind any other element\nElement is disabled\n\nIf the element is not visible then wait until element is visible. For this we will use wait command in selenium\nwait = WebDriverWait(driver, 10)\nelement = wait.until(EC.element_to_be_clickable((By.ID, 'someid')))\nelement.click()\n\nIf the element is off-screen then we need to scroll down the browser and interact with the element.\nUse the execute_script() interface that helps to execute JavaScript methods through Selenium Webdriver.\nbrowser = webdriver.Firefox()\nbrowser.get(\"https://en.wikipedia.org\")\nbrowser.execute_script(\"window.scrollTo(0,1000)\") //scroll 1000 pixel vertical\n\nReference this solution in to the regarding problem.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"selenium",
"web_scraping"
] |
stackoverflow_0074657899_python_selenium_web_scraping.txt
|
Q:
How to change the order of the group in the graph?
library(tidyverse)
df <- read.table(text = "Name WT Mutant
'cellular process' 200 2
'Biological phase' 150 5
'cell process' 100 9", header = TRUE)
df %>%
pivot_longer(-Name) %>%
ggplot(aes(x = Name, y = value, fill = name)) +
geom_col(position = position_stack()) +
coord_flip() +
facet_wrap(~name)
Using the above code I am unable to change the group position in the graph.
I need the WT group first and the Mutant group second. Thanks for your help in advance.
I need to change the position of the group as above stated. Many thanks for the help.
A:
One solution is to set levels in name
df %>%
pivot_longer(-Name) %>%
mutate(name = factor(name, levels=c("WT", "Mutant"))) %>%
ggplot(aes(x = Name, y = value, fill = name)) +
geom_col(position = position_stack()) +
coord_flip() +
facet_wrap(~name)
|
How to change the order of the group in the graph?
|
library(tidyverse)
df <- read.table(text = "Name WT Mutant
'cellular process' 200 2
'Biological phase' 150 5
'cell process' 100 9", header = TRUE)
df %>%
pivot_longer(-Name) %>%
ggplot(aes(x = Name, y = value, fill = name)) +
geom_col(position = position_stack()) +
coord_flip() +
facet_wrap(~name)
Using the above code I am unable to change the group position in the graph.
I need the WT group first and the Mutant group second. Thanks for your help in advance.
I need to change the position of the group as above stated. Many thanks for the help.
|
[
"One solution is to set levels in name\ndf %>%\n pivot_longer(-Name) %>%\n mutate(name = factor(name, levels=c(\"WT\", \"Mutant\"))) %>% \n ggplot(aes(x = Name, y = value, fill = name)) +\n geom_col(position = position_stack()) +\n coord_flip() +\n facet_wrap(~name)\n\n\n"
] |
[
1
] |
[] |
[] |
[
"ggplot2",
"r"
] |
stackoverflow_0074658409_ggplot2_r.txt
|
Q:
How can I intercalate two list of elements which are not comparable?
I'm trying to implement a method called "intercalate" which takes 2 lists 'xs' and 'ys' and returns the intercalation of both lists. However, the difficulty of this exercise is that the elements of those lists cannot be compared with each other. The only information is that they are leaves of a tree with some index. For example:
enter image description here
Here the 2 lists would be {a,e,c,g} and {b,f,d,h} and it should returns {a,b,c,d,e,f,g,h}
If you notice, the indices of the elements always follow the same pattern. I mean, if there were 4 elements in total, the indices would be 0, 2, 1, 4.
I'm listing here the methods and functions which TreeVector has, so that maybe you can help me better.
enter image description here
It should be noted that "Tree" is an interface which can be implemented by a Leaf or a Node. This is the interface:
enter image description here
At first, I tried to insert all elements in a LinkedList like this:
protected static <E> List<E> intercalate(List<E> xs, List<E> ys) {
List<E> auxiliar = new LinkedList<>();
int i = 0;
for (E elem : xs) {
auxiliar.insert(i,elem);
i++;
}
for (E elem : ys) {
auxiliar.insert(i,elem);
i++;
}
return auxiliar;
}
However, I noticed that a LinkedList doesn't insert the elements in the order of a Tree, so now I don't know what to do.
A:
One way to implement the intercalate() method that takes two lists xs and ys and returns the intercalation of those lists in tree order is to iterate over the elements in each list and add them to a new list in the correct order based on their indices.
Here is an example of how you can implement the intercalate() method:
protected static <E> List<E> intercalate(List<E> xs, List<E> ys) {
// Create a new list to store the intercalated elements
List<E> intercalated = new ArrayList<>();
// Iterate over the elements in the xs list
for (E elem : xs) {
// Get the index of the element
int index = elem.getIndex();
// Add the element to the intercalated list at the correct index based on the tree order
intercalated.add(index, elem);
}
// Iterate over the elements in the ys list
for (E elem : ys) {
// Get the index of the element
int index = elem.getIndex();
// Add the element to the intercalated list at the correct index based on the tree order
intercalated.add(index, elem);
}
// Return the intercalated list
return intercalated;
}
In the above example, the intercalate() method iterates over the elements in the xs and ys lists, gets their indices using the getIndex() method, and adds them to a new list in the correct order based on the tree order.
|
How can I intercalate two list of elements which are not comparable?
|
I'm trying to implement a method called "intercalate" which takes 2 lists 'xs' and 'ys' and returns the intercalation of both lists. However, the difficulty of this exercise is that the elements of those lists cannot be compared with each other. The only information is that they are leaves of a tree with some index. For example:
enter image description here
Here the 2 lists would be {a,e,c,g} and {b,f,d,h} and it should returns {a,b,c,d,e,f,g,h}
If you notice, the indices of the elements always follow the same pattern. I mean, if there were 4 elements in total, the indices would be 0, 2, 1, 4.
I'm listing here the methods and functions which TreeVector has, so that maybe you can help me better.
enter image description here
It should be noted that "Tree" is an interface which can be implemented by a Leaf or a Node. This is the interface:
enter image description here
At first, I tried to insert all elements in a LinkedList like this:
protected static <E> List<E> intercalate(List<E> xs, List<E> ys) {
List<E> auxiliar = new LinkedList<>();
int i = 0;
for (E elem : xs) {
auxiliar.insert(i,elem);
i++;
}
for (E elem : ys) {
auxiliar.insert(i,elem);
i++;
}
return auxiliar;
}
However, I noticed that a LinkedList doesn't insert the elements in the order of a Tree, so now I don't know what to do.
|
[
"One way to implement the intercalate() method that takes two lists xs and ys and returns the intercalate of those lists in the order of a tree, is to iterate over the elements in each list and add them to a new list in the correct order based on their indices.\nHere is an example of how you can implement the intercalate() method:\nprotected static <E> List<E> intercalate(List<E> xs, List<E> ys) {\n\n// Create a new list to store the intercalated elements\nList<E> intercalated = new ArrayList<>();\n\n// Iterate over the elements in the xs list\nfor (E elem : xs) {\n // Get the index of the element\n int index = elem.getIndex();\n\n // Add the element to the intercalated list at the correct index based on the tree order\n intercalated.add(index, elem);\n}\n\n// Iterate over the elements in the ys list\nfor (E elem : ys) {\n // Get the index of the element\n int index = elem.getIndex();\n\n // Add the element to the intercalated list at the correct index based on the tree order\n intercalated.add(index, elem);\n}\n\n// Return the intercalated list\nreturn intercalated;\n\n}\nIn the above example, the intercalate() method iterates over the elements in the xs and ys lists, gets their indices using the getIndex() method, and adds them to a new list in the correct order based on the tree order.\n"
] |
[
0
] |
[] |
[] |
[
"insert",
"java",
"list",
"tree",
"vector"
] |
stackoverflow_0074658436_insert_java_list_tree_vector.txt
|
Q:
How can i use redirect()->route('test', $id) after create a data in Laravel?
The store function
The Route
I am trying to store data to the database and then I want to redirect the page with the created data's ID. I use return redirect()->route('test', $id) but it's not working.
A:
In your route the parameter is id but in your controller you are passing it as id_pemakaian. So just need to change it to id :
return redirect()->route('inventoryPemakaianDetail', ['id' => Main::encrypt($pemakaian->id_pemakaian)]);
A:
You need to use this code; it works fine.
return redirect(route('test', ['id' => $id]));
|
How can i use redirect()->route('test', $id) after create a data in Laravel?
|
The store function
The Route
I am trying to store data to the database and then I want to redirect the page with the created data's ID. I use return redirect()->route('test', $id) but it's not working.
|
[
"In your route the parameter is id but in your controller you are passing it as id_pemakaian. So just need to change it to id :\nreturn redirect()->route('inventoryPemakaianDetail', ['id' => Main::encrypt($pemakaian->id_pemakaian)]);\n\n",
"You need to use this code it work fine.\nreturn redirect(route('test', ['id' => $id]));\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"laravel",
"laravel_5"
] |
stackoverflow_0074658298_laravel_laravel_5.txt
|
Q:
best way to speed up multiprocessing code in python?
I am trying to mess around with matrices in Python, and wanted to use multiprocessing to process each row separately for a math operation. I have posted a minimal reproducible sample below, but keep in mind that for my actual code I do in fact need the entire matrix passed to the helper function. This sample takes literally forever to process a 10,000 by 10,000 matrix. Almost 2 hours with 9 processes. Looking in Task Manager it seems only 4-5 of the threads will run at any given time on my CPU, and the application never uses more than 25%. I've done my absolute best to avoid branches in my real code, though the sample provided is branchless. It still takes roughly 25 seconds to process a 1000 by 1000 matrix on my machine, which is ludicrous to me as a mainly C++ developer. I wrote serial code in C that executes the entire 10,000 by 10,000 in constant time in less than a second. I think the main bottleneck is the multiprocessing code, but I am required to do this with multiprocessing. Any ideas for how I could go about improving this? Each row can be processed entirely separately, but the rows need to be joined back together into a matrix for my actual code.
import random
from multiprocessing import Pool
import time
def addMatrixRow(matrixData):
matrix = matrixData[0]
rowNum = matrixData[1]
del (matrixData)
rowSum = 0
for colNum in range(len(matrix[rowNum])):
rowSum += matrix[rowNum][colNum]
return rowSum
def genMatrix(row, col):
matrix = list()
for i in range(row):
matrix.append(list())
for j in range(col):
matrix[i].append(random.randint(0, 1))
return matrix
def main():
matrix = genMatrix(1000, 1000)
print("generated matrix")
MAX_PROCESSES = 4
finalSum = 0
processPool = Pool(processes=MAX_PROCESSES)
poolData = list()
start = time.time()
for i in range(100):
for rowNum in range(len(matrix)):
matrixData = [matrix, rowNum]
poolData.append(matrixData)
finalData = processPool.map(addMatrixRow, poolData)
poolData = list()
finalSum += sum(finalData)
end = time.time()
print(end-start)
print(f'final sum {finalSum}')
if __name__ == '__main__':
main()
A:
Your matrix has 1000 rows of 1000 elements each and you are summing each row 100 times. By my calculation, that is 100,000 tasks you are submitting to the pool passing a one-million element matrix each time. Ouch!
Now I know you say that the worker function addMatrixRow must have access to the complete matrix. Fine. But instead of passing it 100,000 times, you can reduce that to 4 times by initializing each process in the pool with a global variable set to the matrix, using the initializer and initargs arguments when you construct the pool. You are able to get away with this because the matrix is read-only.
And instead of creating poolData as a large list, you can instead create a generator function that when iterated returns the next argument to be submitted to the pool. But to take advantage of this you cannot use the map method, which will convert the generator to a list and not save you any memory. Instead use imap_unordered (rather than imap, since you do not care now in what order your worker function returns its results, because of the commutative law of addition). But with such a large input, you should be using the chunksize argument with imap_unordered, so that the number of reads and writes to the pool's task queue is greatly reduced (albeit the size of the data being written is larger for each queue operation).
If all of this is somewhat vague to you, I suggest reading the docs thoroughly for class multiprocessing.pool.Pool and its imap and imap_unordered methods.
I have made a few other optimizations replacing for loops with list comprehensions and using the built-in sum function.
import random
from multiprocessing import Pool
import time
def init_pool_processes(m):
global matrix
matrix = m
def addMatrixRow(rowNum):
return sum(matrix[rowNum])
def genMatrix(row, col):
return [[random.randint(0, 1) for _ in range(col)] for _ in range(row)]
def compute_chunksize(pool_size, iterable_size):
chunksize, remainder = divmod(iterable_size, 4 * pool_size)
if remainder:
chunksize += 1
return chunksize
def main():
matrix = genMatrix(1000, 1000)
print("generated matrix")
MAX_PROCESSES = 4
processPool = Pool(processes=MAX_PROCESSES, initializer=init_pool_processes, initargs=(matrix,))
start = time.time()
# Use a generator function:
poolData = (rowNum for _ in range(100) for rowNum in range(len(matrix)))
# Compute efficient chunksize
chunksize = compute_chunksize(MAX_PROCESSES, len(matrix) * 100)
finalSum = sum(processPool.imap_unordered(addMatrixRow, poolData, chunksize=chunksize))
end = time.time()
print(end-start)
print(f'final sum {finalSum}')
processPool.close()
processPool.join()
if __name__ == '__main__':
main()
Prints:
generated matrix
0.35799622535705566
final sum 49945400
Note the running time of .36 seconds.
Assuming you have more CPU cores (than 4), use them all for an even greater reduction in time.
A:
You are serializing the entire matrix on each function call; you should only send the data that you are processing to the function, nothing more ... and Python has a built-in sum function backed by very optimized C code.
import random
from multiprocessing import Pool
import time
def addMatrixRow(row_data):
rowSum = sum(row_data)
return rowSum
def genMatrix(row, col):
matrix = list()
for i in range(row):
matrix.append(list())
for j in range(col):
matrix[i].append(random.randint(0, 1))
return matrix
def main():
matrix = genMatrix(1000, 1000)
print("generated matrix")
MAX_PROCESSES = 4
finalSum = 0
processPool = Pool(processes=MAX_PROCESSES)
poolData = list()
start = time.time()
for i in range(100):
for rowNum in range(len(matrix)):
matrixData = matrix[rowNum]
poolData.append(matrixData)
finalData = processPool.map(addMatrixRow, poolData)
poolData = list()
finalSum += sum(finalData)
end = time.time()
print(end-start)
print(f'final sum {finalSum}')
if __name__ == '__main__':
main()
generated matrix
3.5028157234191895
final sum 49963400
Just not using a process pool and running the code serially using list(map(sum, poolData)):
generated matrix
1.2143816947937012
final sum 50020800
So yes, Python can do it in a second.
|
best way to speed up multiprocessing code in python?
|
I am trying to mess around with matrices in Python, and wanted to use multiprocessing to process each row separately for a math operation. I have posted a minimal reproducible sample below, but keep in mind that for my actual code I do in fact need the entire matrix passed to the helper function. This sample takes literally forever to process a 10,000 by 10,000 matrix. Almost 2 hours with 9 processes. Looking in Task Manager it seems only 4-5 of the threads will run at any given time on my CPU, and the application never uses more than 25%. I've done my absolute best to avoid branches in my real code, though the sample provided is branchless. It still takes roughly 25 seconds to process a 1000 by 1000 matrix on my machine, which is ludicrous to me as a mainly C++ developer. I wrote serial code in C that executes the entire 10,000 by 10,000 in constant time in less than a second. I think the main bottleneck is the multiprocessing code, but I am required to do this with multiprocessing. Any ideas for how I could go about improving this? Each row can be processed entirely separately, but the rows need to be joined back together into a matrix for my actual code.
import random
from multiprocessing import Pool
import time
def addMatrixRow(matrixData):
matrix = matrixData[0]
rowNum = matrixData[1]
del (matrixData)
rowSum = 0
for colNum in range(len(matrix[rowNum])):
rowSum += matrix[rowNum][colNum]
return rowSum
def genMatrix(row, col):
matrix = list()
for i in range(row):
matrix.append(list())
for j in range(col):
matrix[i].append(random.randint(0, 1))
return matrix
def main():
matrix = genMatrix(1000, 1000)
print("generated matrix")
MAX_PROCESSES = 4
finalSum = 0
processPool = Pool(processes=MAX_PROCESSES)
poolData = list()
start = time.time()
for i in range(100):
for rowNum in range(len(matrix)):
matrixData = [matrix, rowNum]
poolData.append(matrixData)
finalData = processPool.map(addMatrixRow, poolData)
poolData = list()
finalSum += sum(finalData)
end = time.time()
print(end-start)
print(f'final sum {finalSum}')
if __name__ == '__main__':
main()
|
[
"Your matrix has 1000 rows of 1000 elements each and you are summing each row 100 times. By my calculation, that is 100,000 tasks you are submitting to the pool passing a one-million element matrix each time. Ouch!\nNow I know you say that the worker function addMatrixRow must have access to the complete matrix. Fine. But instead of passing it a 100,000 times, you can reduce that to 4 times by initializing each process in the pool with a global variable set to the matrix using the initializer and initargs arguments when you construct the pool. You are able to get away with this because the matrix is read-only.\nAnd instead of creating poolArgs as a large list you can instead create a generator function that when iterated returns the next argument to be submitted to the pool. But to take advantage of this you cannot use the map method, which will convert the generator to a list and not save you any memory. Instead use imap_unordered (rather than imap since you do not care now in what order your worker function is returning its results because of the commutative law of addition). But with such a large input, you should be using the chunksize argument with imap_unordered. So that the number of reads and writes to the pool's task queue is greatly reduced(albeit the size of the data being written is larger for each queue operation).\nIf all of this is somewhat vague to you, I suggest reading the docs thoroughly for class multiprocessing.pool.Pool and its imap and imap_unordered methods.\nI have made a few other optimizations replacing for loops with list comprehensions and using the built-in sum function.\nimport random\nfrom multiprocessing import Pool\nimport time\n\n\ndef init_pool_processes(m):\n global matrix\n matrix = m \n\ndef addMatrixRow(rowNum):\n return sum(matrix[rowNum])\n\ndef genMatrix(row, col):\n return [[random.randint(0, 1) for _ in range(col)] for _ in range(row)]\n \ndef compute_chunksize(pool_size, iterable_size):\n chunksize, remainder = divmod(iterable_size, 4 * pool_size)\n if remainder:\n chunksize += 1\n return chunksize\n\ndef main():\n matrix = genMatrix(1000, 1000)\n print(\"generated matrix\")\n MAX_PROCESSES = 4\n\n processPool = Pool(processes=MAX_PROCESSES, initializer=init_pool_processes, initargs=(matrix,))\n start = time.time()\n # Use a generator function:\n poolData = (rowNum for _ in range(100) for rowNum in range(len(matrix)))\n # Compute efficient chunksize\n chunksize = compute_chunksize(MAX_PROCESSES, len(matrix) * 100)\n finalSum = sum(processPool.imap_unordered(addMatrixRow, poolData, chunksize=chunksize))\n end = time.time()\n print(end-start)\n print(f'final sum {finalSum}')\n processPool.close()\n processPool.join()\n\n\nif __name__ == '__main__':\n main()\n\nPrints:\ngenerated matrix\n0.35799622535705566\nfinal sum 49945400\n\nNote the running time of .36 seconds.\nAssuming you have more CPU cores (than 4), use them all for an even greater reduction in time.\n",
"you are serializing the entire matrix on each function call, you should only send the data that you are processing to the function, nothing more ... and python has a built-in sum function that has a very optimized C code.\nimport random\nfrom multiprocessing import Pool\nimport time\n\n\ndef addMatrixRow(row_data):\n rowSum = sum(row_data)\n return rowSum\n\n\ndef genMatrix(row, col):\n matrix = list()\n for i in range(row):\n matrix.append(list())\n for j in range(col):\n matrix[i].append(random.randint(0, 1))\n return matrix\n\ndef main():\n matrix = genMatrix(1000, 1000)\n print(\"generated matrix\")\n MAX_PROCESSES = 4\n finalSum = 0\n\n processPool = Pool(processes=MAX_PROCESSES)\n poolData = list()\n\n start = time.time()\n for i in range(100):\n for rowNum in range(len(matrix)):\n matrixData = matrix[rowNum]\n poolData.append(matrixData)\n\n finalData = processPool.map(addMatrixRow, poolData)\n poolData = list()\n finalSum += sum(finalData)\n end = time.time()\n print(end-start)\n print(f'final sum {finalSum}')\n\n\nif __name__ == '__main__':\n main()\n\ngenerated matrix\n3.5028157234191895\nfinal sum 49963400\n\njust not using process pool and running the code serially using list(map(sum,poolData))\ngenerated matrix\n1.2143816947937012\nfinal sum 50020800\n\nso yeh python can do it in a second.\n"
] |
[
3,
1
] |
[] |
[] |
[
"matrix",
"multiprocessing",
"process_pool",
"python"
] |
stackoverflow_0074646298_matrix_multiprocessing_process_pool_python.txt
|
Q:
where's the subtractive in new unreal engine 5
Hy guys, does someone know where is the subtractive mode in unreal engine 5?
I'm trying to transform a simple box from additive to subtractive but it's different from Unreal engine 4.
A:
Just click on your geometry.
In the Details section you can find Brush Type; by default it is set to Additive.
Now you can change it to Subtractive.
|
where's the subtractive in new unreal engine 5
|
Hy guys, does someone know where is the subtractive mode in unreal engine 5?
I'm trying to transform a simple box from additive to subtractive but it's different from Unreal engine 4.
|
[
"\nJust click on your geometry,\nIn the Detail section.\nyou can find Brush Type By default it must be set on Additive .\nNow you can change.\n\n"
] |
[
0
] |
[] |
[] |
[
"box",
"subtraction",
"unreal_engine5"
] |
stackoverflow_0074492103_box_subtraction_unreal_engine5.txt
|
Q:
Elixir: VS Code ExUnit cannot find Mix
I cannot load or run my tests, from within VS Code.
I'm a new user to Elixir, and to VS Code. I'm running Lubuntu 21.10 (Impish). I've downloaded Erlang/OTP 25 (.deb), and Elixir 1.14 (precompiled binary in /usr/share/elixir), and can get anything I need running in a Bash terminal. Again, in a standard QTerminal window,
erl, iex, mix, elixir, etc. all work fine.
In VS Code, however, I get some errors. I feel stupid, but I'm coming from Sublime Text, so please forgive me.
In the left pane of VS Code, ExUnit shows an error (red):
Clicking on this error gives me this, on the bottom right pane. The command line options, passed to mix test, seem to be the default configuration:
This result is bizarre to me, because I can open the integrated terminal, execute /bin/sh, and then run the exact mix test line that's displayed:
/usr/share/elixir/bin has been added to my PATH variable, in ~/.bashrc, ~/.profile, and /etc/environment.
However, I am further confused by all tests being excluded, and wonder if there's some connection to the core issue:
Note that I can run my tests just fine, using different command line options. I've tried adding tags, but that didn't fix the problem.
I tried Google'ing this, and played around with my settings. Here is what I have configured in the "User" settings.json, and I made sure nothing overrides this in "Workspace" settings:
Changing the useNativeTesting setting doesn't solve the problem.
On another (?) note, I get a "failed to run elixir" upon VS Code startup:
Again, I have no problem running commands from a Linux terminal, or from a terminal within VS Code.
Plot twist: If I remove the precompiled Elixir 1.14, and downgrade to an older version, via apt, the problem goes away. But Lubuntu 21.10 doesn't offer Elixir 1.14, and I'm really into using the new dbg() feature.
But for now, I cannot load or run my tests, from within VS Code, apparently because Mix cannot be found.
A:
Thanks to Daniel Imms, from the VS Code team, for answering my question on Twitter:
"Try moving where ever you init mix and elixir (.bashrc?) into your .bash_profile and then logging out and in again or restarting. I'm guessing it's in your bashrc which doesn't run in non-interactive sessions like in tasks."
|
Elixir: VS Code ExUnit cannot find Mix
|
I cannot load or run my tests, from within VS Code.
I'm a new user to Elixir, and to VS Code. I'm running Lubuntu 21.10 (Impish). I've downloaded Erlang/OTP 25 (.deb), and Elixir 1.14 (precompiled binary in /usr/share/elixir), and can get anything I need running in a Bash terminal. Again, in a standard QTerminal window,
erl, iex, mix, elixir, etc. all work fine.
In VS Code, however, I get some errors. I feel stupid, but I'm coming from Sublime Text, so please forgive me.
In the left pane of VS Code, ExUnit shows an error (red):
Clicking on this error gives me this, on the bottom right pane. The command line options, passed to mix test, seem to be the default configuration:
This result is bizarre to me, because I can open the integrated terminal, execute /bin/sh, and then run the exact mix test line that's displayed:
/usr/share/elixir/bin has been added to my PATH variable, in ~/.bashrc, ~/.profile, and /etc/environment.
However, I am further confused by all tests being excluded, and wonder if there's some connection to the core issue:
Note that I can run my tests just fine, using different command line options. I've tried adding tags, but that didn't fix the problem.
I tried Google'ing this, and played around with my settings. Here is what I have configured in the "User" settings.json, and I made sure nothing overrides this in "Workspace" settings:
Changing the useNativeTesting setting doesn't solve the problem.
On another (?) note, I get a "failed to run elixir" upon VS Code startup:
Again, I have no problem running commands from a Linux terminal, or from a terminal within VS Code.
Plot twist: If I remove the precompiled Elixir 1.14, and downgrade to an older version, via apt, the problem goes away. But Lubuntu 21.10 doesn't offer Elixir 1.14, and I'm really into using the new dbg() feature.
But for now, I cannot load or run my tests, from within VS Code, apparently because Mix cannot be found.
|
[
"Thanks to Daniel Imms, from the VS Code team, for answering my question on Twitter:\n\"Try moving where ever you init mix and elixir (.bashrc?) into your .bash_profile and then logging out and in again or restarting. I'm guessing it's in your bashrc which doesn't run in non-interactive sessions like in tasks.\"\n"
] |
[
0
] |
[] |
[] |
[
"elixir",
"elixir_mix",
"ex_unit",
"visual_studio_code",
"vscode_extensions"
] |
stackoverflow_0074633193_elixir_elixir_mix_ex_unit_visual_studio_code_vscode_extensions.txt
|
Q:
How to configure Spring Cloud Configuration Server without the git profile?
I'm attempting to run Spring Cloud Configuration Server, working through the examples in a book (Manning's Spring Microservices in Action), but updating to the latest versions: Java 17, spring-boot-starter-parent 2.6.1, with Spring Cloud 2021.0.0-RC1.
Each time I try to start the server, I get this error:
***************************
APPLICATION FAILED TO START
***************************
Description:
Invalid config server configuration.
Action:
If you are using the git profile, you need to set a Git URI in your configuration. If you have set spring.cloud.config.server.bootstrap=true, you need to use a composite configuration.
I am not using the git profile. I have tried two different profiles: native (with config files on the classpath) and vault (with a Hashicorp Vault server running locally). My latest /src/resources/bootstrap.yml contains the following:
spring:
application:
name: config-server
profiles:
active: vault
cloud:
config:
server:
vault:
port: 8200
host: 127.0.0.1
kvVersion: 2
server:
port: 8071
My best guess is that the bootstrap.yml file isn't getting picked up at server startup, and perhaps the git profile is a default. How can I remedy this?
A:
OK, it looks like the problem here is that newer versions of Spring Cloud Configuration Server don't look for the bootstrap.yml file by default. There are a few different ways to solve it. The easiest is just to move all the properties to an application.yml/application.properties instead.
Another alternative (found at NEWBEDEV here) is to include a dependency that implements the "legacy" bootstrap behavior:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-bootstrap</artifactId>
</dependency>
A:
add the following into application.properties file.
spring.application.name=techefx-spring-cloud-config-server
spring.cloud.config.server.git.uri=https://github.com/techefx/environment-variable-repo.git
server.port= ${port:8888}
A:
Please go through the link below:
Spring Cloud Config File System Backend Issue (not reading properties from the file)
|
How to configure Spring Cloud Configuration Server without the git profile?
|
I'm attempting to run Spring Cloud Configuration Server, working through the examples in a book (Manning's Spring Microservices in Action), but updating to the latest versions: Java 17, spring-boot-starter-parent 2.6.1, with Spring Cloud 2021.0.0-RC1.
Each time I try to start the server, I get this error:
***************************
APPLICATION FAILED TO START
***************************
Description:
Invalid config server configuration.
Action:
If you are using the git profile, you need to set a Git URI in your configuration. If you have set spring.cloud.config.server.bootstrap=true, you need to use a composite configuration.
I am not using the git profile. I have tried two different profiles: native (with config files on the classpath) and vault (with a Hashicorp Vault server running locally). My latest /src/resources/bootstrap.yml contains the following:
spring:
application:
name: config-server
profiles:
active: vault
cloud:
config:
server:
vault:
port: 8200
host: 127.0.0.1
kvVersion: 2
server:
port: 8071
My best guess is that the bootstrap.yml file isn't getting picked up at server startup, and perhaps the git profile is a default. How can I remedy this?
|
[
"OK, it looks like the problem here is that newer versions of Spring Cloud Configuration Server don't look for the bootstrap.yml file by default. There are a few different ways to solve it. The easiest is just to move all the properties to an application.yml/application.properties instead.\nAnother alternative is (found at NEWBEDEV here) is to include a dependency that implements the \"legacy\" bootstrap behavior:\n<dependency>\n <groupId>org.springframework.cloud</groupId>\n <artifactId>spring-cloud-starter-bootstrap</artifactId>\n</dependency>\n\n",
"add the following into application.properties file.\nspring.application.name=techefx-spring-cloud-config-server\nspring.cloud.config.server.git.uri=https://github.com/techefx/environment-variable-repo.git\nserver.port= ${port:8888}\n",
"Please go through the link below:\nSpring Cloud Config File System Backend Issue (not reading properties from the file)\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"spring_cloud_config_server"
] |
stackoverflow_0070204372_spring_cloud_config_server.txt
|
Q:
Pie Chart without "Time" in Apache Superset
Creating a Pie Chart within Apache Superset currently requires a "Time" column, is it possible to plot data without this need for a datetime field?
When plotting something like the result of a poll (e.g. people's favourite food, categories vs count), there is no concept of a datetime field.
A:
Yes, you can create a pie chart without this field (at least in 2.0.0 or higher).
It's a little confusing because if your dataset has a datetime field, it will automatically get added to the Time field and you will be unable to remove it. But it shouldn't affect the chart, and if you try to use a dataset without a datetime field, you can still create pie charts.
In short, I think you can just ignore that field.
|
Pie Chart without "Time" in Apache Superset
|
Creating a Pie Chart within Apache Superset currently requires a "Time" column, is it possible to plot data without this need for a datetime field?
When plotting something like the result of a poll (e.g. people's favourite food, categories vs count), there is no concept of a datetime field.
|
[
"Yes, you can create a pie chart without this field (at least in 2.0.0 or higher).\nIt's a little confusing because if your dataset has a datetime field, it will automatically get added to the Time field and you will be unable to remove it. But it shouldn't affect the chart, and if you try to use a dataset without a datetime field, you can still create pie charts.\nIn short, I think you can just ignore that field.\n"
] |
[
1
] |
[] |
[] |
[
"apache_superset",
"dashboard"
] |
stackoverflow_0074626938_apache_superset_dashboard.txt
|
Q:
How to close popup without reload page?
I have a circle after clicking on which a popup appears . This popup has a button to close it. But it doesn't work the way I want it to. I want the popup to close but not reload the page.
I tried using onclick="this.parentNode.parentNode.remove(); return false;" . But it doesn't work correctly. Please tell me how to do it!
HTML:
<main>
<div class="button">
<input onclick="check()" type="checkbox" name="popup__input" id="popup__input" class="popup__check">
</div>
<div class="popup" name="popup" id="popup">
<label class="popup__label">
<form>
<div class="form__input">
<input type="text" id="fname" name="fname"><br><br>
</div>
<button class="form__button" type="submit" form="nameform" value="Submit">Відправити</button>
<div class="close-button__container">
<button class="close__button">×</button>
</div>
</form>
</label>
</div>
<div class="overlay">
</div>
</main>
CSS:
*,
*::before,
*::after {
margin: 0;
padding: 0;
border: none;
box-sizing: border-box;
}
main {
width: 100%;
height: 100%;
position: relative;
}
.button {
width: 200px;
height: 200px;
border-radius: 100%;
background: linear-gradient(#e66465, #9198e5);
position: absolute;
grid-area: "c";
transition: linear 4s;
}
input[type=text], select {
width: 100%;
height: 60px;
padding: 12px 20px;
margin: 40px 0;
font-size: 60px;
display: flex;
border: 1px solid #ccc;
box-sizing: border-box;
border-radius: 5px;
}
form {
position: absolute;
display: grid;
grid-template-areas:
"a"
"b"
"c";
grid-template-columns: repeat(12, 1fr);
}
.form__input{
display: grid;
grid-area: "b";
grid-column-start: 2;
grid-column-end: 12;
}
.close__button{
width: 50px;
position: absolute;
height: 50px;
cursor: pointer;
border: none;
outline: none;
font-size: 3rem;
font-weight: bold;
background-color: rgba(104, 99, 99, 0);
grid-column-start: 12;
grid-column-end: 12;
}
.close-button__container{
display: grid;
grid-area: "a";
place-items: start middle;
}
.form__button {
width: 100%;
height: 55px;
display: grid;
margin-top: 110px;
position: absolute;
font-size: 35px;
color: #fff;
align-items: center;
background-color:#2962d3;
grid-column-start: 3;
grid-column-end: 11;
border-radius: 5px;
cursor: pointer;
}
.form__button:hover {
background-color: #4a79d6;
transition: 0.5s;
}
.overlay {
display: none;
position: fixed;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: rgba(0, 0, 0, 0.5);
}
.popup {
display: none;
position: fixed;
top: 50%;
left: 50%;
z-index: 10;
transform: translate(-50%, -50%);
border-radius: 10px;
width: 850px;
height: 200px;
background: rgba(104, 99, 99, 0.5);
justify-content: end;
align-items: end;
}
.popup__label {
}
.popup__check {
position: absolute;
width: 100%;
height: 100%;
border-radius: 100%;
cursor: pointer;
z-index: 3;
appearance: none;
-webkit-appearance: none;
-moz-appearance: none;
}
@media (max-width: 1024.98px) {
.button {
width:80px;
height: 80px;
border-radius: 100%;
background: linear-gradient(#e66465, #9198e5);
position: absolute;
transition: linear 4s;
}
}
@media (max-width: 890.98px) {
.popup {
width: 750px;
}
.form__button {
width: 500px;
height: 45px;
grid-column-start: 3;
grid-column-end: 6;
}
}
@media (max-width: 768.98px) {
input[type=text], select {
padding: 12px 20px;
margin: 50px 0;
font-size: 40px;
display: flex;
border: 1px solid #ccc;
box-sizing: border-box;
}
.form__button {
width: 100%;
height: 45px;
grid-column-start: 3;
grid-column-end: 11;
margin-top: 125px;
}
.popup {
width: 600px;
}
}
@media (max-width: 620.98px) {
.popup {
width: 480px;
}
input[type=text], select {
height: 50px;
}
.form__button {
height: 35px;
grid-column-start: 3;
grid-column-end: 11;
font-size: 20px;
}
.close__button{
font-size: 2rem;
}
}
@media (max-width: 507.98px) {
input[type=text], select {
padding: 12px 20px;
margin: 45px 0;
}
.popup {
width: 330px;
}
.close__button{
width: 25px;
height: 25px;
}
}
@media (max-width: 400.98px) {
.form__button {
height: 30px;
position: absolute;
font-size: 15px;
grid-column-start: 3;
grid-column-end: 11;
margin-top: 100px;
}
.popup {
width: 350px;
height: 150px;
}
}
@media (max-width: 358.98px) {
input[type=text], select {
height: 30px;
padding: 12px 20px;
margin: 35px 0px;
}
.form__button {
height: 30px;
margin-top: 80px;
position: absolute;
font-size: 15px;
}
.popup {
width: 280px;
height: 150px;
}
.close__button{
width: 15px;
height: 15px;
}
}
@media (max-width: 300.98px) {
input[type=text], select {
width: 200px;
height: 30px;
padding: 12px 20px;
margin: 35px 0px;
font-size: 20px;
display: flex;
border: 1px solid #ccc;
box-sizing: border-box;
}
.form__button {
height: 30px;
margin-top: 80px;
position: absolute;
font-size: 15px;
}
.popup {
width: 240px;
height: 150px;
}
}
JS:
let elem = document.querySelector('.button');
function check() {
const popup = document.getElementsByClassName('popup');
if (document.getElementById('popup__input').checked = true) {
for (var i=0;i<popup.length;i+=1){
popup[i].style.display = 'block';
}
} else {
popup.style.display = "none";
}
}
const changePosition = () => {
let randX = Math.random();
let randY = Math.random();
const circleSize = {
width: elem.clientWidth,
heigth: elem.clientHeight
};
const windowWidth = window.innerWidth - circleSize.width;
const windowheigth = window.innerHeight - circleSize.heigth;
let randXMult = windowheigth * randX;
let randXP = randXMult + 'px';
let randYMult = windowWidth * randY;
let randYP = randYMult + 'px';
elem.style.top = randXP;
elem.style.left = randYP;
};
setInterval(changePosition, 1000);
A:
Set the type attribute of the close icon's button to button (so it does not submit the form) and give it an id of close__button so that we can select it with JavaScript.
<button id="close__button" type="button" class="close__button">×</button>
Grab the button and add a click event listener:
function close() {
    document.getElementById('popup').style.display = "none";
}
document.getElementById('close__button').addEventListener('click', close);
|
How to close popup without reload page?
|
I have a circle after clicking on which a popup appears . This popup has a button to close it. But it doesn't work the way I want it to. I want the popup to close but not reload the page.
I tried using onclick="this.parentNode.parentNode.remove(); return false;" . But it doesn't work correctly. Please tell me how to do it!
HTML:
<main>
<div class="button">
<input onclick="check()" type="checkbox" name="popup__input" id="popup__input" class="popup__check">
</div>
<div class="popup" name="popup" id="popup">
<label class="popup__label">
<form>
<div class="form__input">
<input type="text" id="fname" name="fname"><br><br>
</div>
<button class="form__button" type="submit" form="nameform" value="Submit">Відправити</button>
<div class="close-button__container">
<button class="close__button">×</button>
</div>
</form>
</label>
</div>
<div class="overlay">
</div>
</main>
CSS:
*,
*::before,
*::after {
margin: 0;
padding: 0;
border: none;
box-sizing: border-box;
}
main {
width: 100%;
height: 100%;
position: relative;
}
.button {
width: 200px;
height: 200px;
border-radius: 100%;
background: linear-gradient(#e66465, #9198e5);
position: absolute;
grid-area: "c";
transition: linear 4s;
}
input[type=text], select {
width: 100%;
height: 60px;
padding: 12px 20px;
margin: 40px 0;
font-size: 60px;
display: flex;
border: 1px solid #ccc;
box-sizing: border-box;
border-radius: 5px;
}
form {
position: absolute;
display: grid;
grid-template-areas:
"a"
"b"
"c";
grid-template-columns: repeat(12, 1fr);
}
.form__input{
display: grid;
grid-area: "b";
grid-column-start: 2;
grid-column-end: 12;
}
.close__button{
width: 50px;
position: absolute;
height: 50px;
cursor: pointer;
border: none;
outline: none;
font-size: 3rem;
font-weight: bold;
background-color: rgba(104, 99, 99, 0);
grid-column-start: 12;
grid-column-end: 12;
}
.close-button__container{
display: grid;
grid-area: "a";
place-items: start middle;
}
.form__button {
width: 100%;
height: 55px;
display: grid;
margin-top: 110px;
position: absolute;
font-size: 35px;
color: #fff;
align-items: center;
background-color:#2962d3;
grid-column-start: 3;
grid-column-end: 11;
border-radius: 5px;
cursor: pointer;
}
.form__button:hover {
background-color: #4a79d6;
transition: 0.5s;
}
.overlay {
display: none;
position: fixed;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: rgba(0, 0, 0, 0.5);
}
.popup {
display: none;
position: fixed;
top: 50%;
left: 50%;
z-index: 10;
transform: translate(-50%, -50%);
border-radius: 10px;
width: 850px;
height: 200px;
background: rgba(104, 99, 99, 0.5);
justify-content: end;
align-items: end;
}
.popup__label {
}
.popup__check {
position: absolute;
width: 100%;
height: 100%;
border-radius: 100%;
cursor: pointer;
z-index: 3;
appearance: none;
-webkit-appearance: none;
-moz-appearance: none;
}
@media (max-width: 1024.98px) {
.button {
width:80px;
height: 80px;
border-radius: 100%;
background: linear-gradient(#e66465, #9198e5);
position: absolute;
transition: linear 4s;
}
}
@media (max-width: 890.98px) {
.popup {
width: 750px;
}
.form__button {
width: 500px;
height: 45px;
grid-column-start: 3;
grid-column-end: 6;
}
}
@media (max-width: 768.98px) {
input[type=text], select {
padding: 12px 20px;
margin: 50px 0;
font-size: 40px;
display: flex;
border: 1px solid #ccc;
box-sizing: border-box;
}
.form__button {
width: 100%;
height: 45px;
grid-column-start: 3;
grid-column-end: 11;
margin-top: 125px;
}
.popup {
width: 600px;
}
}
@media (max-width: 620.98px) {
.popup {
width: 480px;
}
input[type=text], select {
height: 50px;
}
.form__button {
height: 35px;
grid-column-start: 3;
grid-column-end: 11;
font-size: 20px;
}
.close__button{
font-size: 2rem;
}
}
@media (max-width: 507.98px) {
input[type=text], select {
padding: 12px 20px;
margin: 45px 0;
}
.popup {
width: 330px;
}
.close__button{
width: 25px;
height: 25px;
}
}
@media (max-width: 400.98px) {
.form__button {
height: 30px;
position: absolute;
font-size: 15px;
grid-column-start: 3;
grid-column-end: 11;
margin-top: 100px;
}
.popup {
width: 350px;
height: 150px;
}
}
@media (max-width: 358.98px) {
input[type=text], select {
height: 30px;
padding: 12px 20px;
margin: 35px 0px;
}
.form__button {
height: 30px;
margin-top: 80px;
position: absolute;
font-size: 15px;
}
.popup {
width: 280px;
height: 150px;
}
.close__button{
width: 15px;
height: 15px;
}
}
@media (max-width: 300.98px) {
input[type=text], select {
width: 200px;
height: 30px;
padding: 12px 20px;
margin: 35px 0px;
font-size: 20px;
display: flex;
border: 1px solid #ccc;
box-sizing: border-box;
}
.form__button {
height: 30px;
margin-top: 80px;
position: absolute;
font-size: 15px;
}
.popup {
width: 240px;
height: 150px;
}
}
JS:
let elem = document.querySelector('.button');
function check() {
const popup = document.getElementsByClassName('popup');
if (document.getElementById('popup__input').checked = true) {
for (var i=0;i<popup.length;i+=1){
popup[i].style.display = 'block';
}
} else {
popup.style.display = "none";
}
}
const changePosition = () => {
let randX = Math.random();
let randY = Math.random();
const circleSize = {
width: elem.clientWidth,
heigth: elem.clientHeight
};
const windowWidth = window.innerWidth - circleSize.width;
const windowheigth = window.innerHeight - circleSize.heigth;
let randXMult = windowheigth * randX;
let randXP = randXMult + 'px';
let randYMult = windowWidth * randY;
let randYP = randYMult + 'px';
elem.style.top = randXP;
elem.style.left = randYP;
};
setInterval(changePosition, 1000);
|
[
"Try setting the type of button on the close icon as button and give the button an id of close__button so that we can select with javascript.\n<button id=\"close__button\" type=\"button\" class=\"close__button\">×</button>\nGrab the button and add a click event listener:\n \n function close() {\n const popup = document.getElementById('popup').style.display = \"none\";\n }```\n\n\n"
] |
[
0
] |
[
"To close a popup window without reloading the page in HTML, you can use the window.close() method. This method can be called from within the popup window to close itself. Here is an example of how you can use the window.close() method to close a popup window:\n<button onclick=\"window.close()\">Close Window</button>\n\nIn this example, the button element has an onclick attribute that calls the window.close() method when the button is clicked. This will cause the popup window to close without reloading the page.\nKeep in mind that the window.close() method can only be used to close a window that was opened using JavaScript, such as with the window.open() method. It cannot be used to close windows that were not opened using JavaScript. Additionally, some browsers may prevent the window.close() method from being used in certain situations for security reasons.\n"
] |
[
-1
] |
[
"css",
"html",
"javascript"
] |
stackoverflow_0074657022_css_html_javascript.txt
|
Q:
How to Finding USB port Address with dev/tty/usb.. format in raspberry pi 4 ver.b?
I had a problem for searching USB Address Port in Raspberry Pi. I'm using RIGOL DSE1102E Digital Oscilloscope, to acquiring data to my Raspberry Pi 4 Ver. b.
So, i'm connecting from Raspberry Pi 4 to my Oscilloscope USB Slave's port and i'm checking in my Raspberry terminal. So i'm typing
pi@raspberrypi:~$ lsusb
so, it returned
Bus 002 Device 001 : ID 1d6b Linux Foundation 3.0 root hub
Bus 001 Device 003: ID 1ab1:0588 Rigol Technologies DS1000 SERIES
Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
so i'm assumed my Raspberry is connected to my instrument because appearance of this line
Bus 001 Device 003: ID 1ab1:0588 Rigol Technologies DS1000 SERIES
so, based on this case how to know this
Bus 001 Device 003: ID 1ab1:0588 Rigol Technologies DS1000 SERIES
address in format
dev/tty/usb...
because i want to code it using pyos library from Python
A:
Instead of using /dev/ttyUSB0 I recommend using the symlinks provided by the kernel in /dev/serial/by-id. They contain a lot of info about the USB device, including the vendor ID and product ID, so you can be sure you are opening the right device. They also should be pretty stable, not depending on the USB port you use or the order the devices are plugged in. Run ls -l /dev/serial/by-id to explore the options.
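A minimal sketch of using such a path from Python with pyserial — the by-id pattern below is a guess for illustration (check the real name with ls -l /dev/serial/by-id), and whether the scope enumerates as a serial port at all depends on its USB mode:
import glob
import serial  # pip install pyserial

# Look for a stable by-id symlink that mentions Rigol (pattern is an assumption).
candidates = glob.glob('/dev/serial/by-id/*Rigol*')
if not candidates:
    raise RuntimeError('No matching device under /dev/serial/by-id')

with serial.Serial(candidates[0], baudrate=9600, timeout=1) as dev:
    dev.write(b'*IDN?\n')              # SCPI identification query
    print(dev.readline().decode(errors='replace'))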
|
How to Finding USB port Address with dev/tty/usb.. format in raspberry pi 4 ver.b?
|
I had a problem for searching USB Address Port in Raspberry Pi. I'm using RIGOL DSE1102E Digital Oscilloscope, to acquiring data to my Raspberry Pi 4 Ver. b.
So, i'm connecting from Raspberry Pi 4 to my Oscilloscope USB Slave's port and i'm checking in my Raspberry terminal. So i'm typing
pi@raspberrypi:~$ lsusb
so, it returned
Bus 002 Device 001 : ID 1d6b Linux Foundation 3.0 root hub
Bus 001 Device 003: ID 1ab1:0588 Rigol Technologies DS1000 SERIES
Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
so i'm assumed my Raspberry is connected to my instrument because appearance of this line
Bus 001 Device 003: ID 1ab1:0588 Rigol Technologies DS1000 SERIES
so, based on this case how to know this
Bus 001 Device 003: ID 1ab1:0588 Rigol Technologies DS1000 SERIES
address in format
dev/tty/usb...
because i want to code it using pyos library from Python
|
[
"Instead of using /dev/ttyUSB0 I recommend using the symlinks provided by the kernel in /dev/serial/by-id. They contain a lot of info about the USB device, including the vendor ID and product ID, so you can be sure you are opening the right device. They also should be pretty stable, not depending on the USB port you use or the order the devices are plugged in. Run ls -l /dev/serial/by-id to explore the options.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"raspberry_pi4",
"usb"
] |
stackoverflow_0074623554_python_raspberry_pi4_usb.txt
|
Q:
How do I interchange two sets of elements in a string in python?
So, how can I interchange two sets of adjacent elements In a string.
Like lets take a string "abcd" I want to make it "cdab",another example would be "5089" I want to change this to "8950", The string is a large one and I want to apply the method throughout the string. Can you guys please suggest a way to do the same in python.
I tried modifying an existing algorithm for interchanging adjacent characters but it didn't work.
Thank You.
A:
Use the slice notation
def swap(value):
middle = len(value) // 2
return value[middle:] + value[:middle]
print(swap("abcd")) # cdab
print(swap("abcde")) # cdeab
print(swap("1234567890")) # 6789012345
A:
s = "abcdefghijklmn"
ans = []
n = 2
flip = True
for i in range(0, len(s), n):
if flip:
ind = i + n
else:
ind = i - n
flip = not flip
if len(s[ind:ind + n]) > 0:
ans.append(s[ind:ind + n])
else:
ans.append(s[i:])
"".join(ans)
# 'cdabghefklijmn'
OR
s = "abcdefghijklmn"
ans = list(s)
n = 2
for i in range(0, len(s), 2 * n):
ans[i + n : i + 2 * n], ans[i : i + n] = ans[i : i + n], ans[i + n : i + 2 * n]
"".join(ans)
# 'cdabghefklijmn'
|
How do I interchange two sets of elements in a string in python?
|
So, how can I interchange two sets of adjacent elements In a string.
Like lets take a string "abcd" I want to make it "cdab",another example would be "5089" I want to change this to "8950", The string is a large one and I want to apply the method throughout the string. Can you guys please suggest a way to do the same in python.
I tried modifying an existing algorithm for interchanging adjacent characters but it didn't work.
Thank You.
|
[
"Use the slice notation\ndef swap(value):\n middle = len(value) // 2\n return value[middle:] + value[:middle]\n\nprint(swap(\"abcd\")) # cdab\nprint(swap(\"abcde\")) # cdeab\nprint(swap(\"1234567890\")) # 6789012345\n\n",
"s = \"abcdefghijklmn\"\n\nans = []\nn = 2\nflip = True\nfor i in range(0, len(s), n):\n if flip:\n ind = i + n\n else:\n ind = i - n\n flip = not flip\n \n if len(s[ind:ind + n]) > 0:\n ans.append(s[ind:ind + n])\n else:\n ans.append(s[i:])\n\n\"\".join(ans)\n# 'cdabghefklijmn'\n\nOR\ns = \"abcdefghijklmn\"\n\nans = list(s)\nn = 2\n\nfor i in range(0, len(s), 2 * n):\n ans[i + n : i + 2 * n], ans[i : i + n] = ans[i : i + n], ans[i + n : i + 2 * n]\n\"\".join(ans)\n# 'cdabghefklijmn'\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"algorithm",
"python",
"string"
] |
stackoverflow_0074658493_algorithm_python_string.txt
|
Q:
How to resize react ace editor in react app
I am using React Ace {https://www.npmjs.com/package/react-ace} in my React app and I am showing the previews of user entered HTML, CSS and JavaScript below input like this:
Code Editor Preview
I want to make user able to adjust the height of editor and preview by dragging by at bottom of editor like CodePen.
I tried to resize with all of available questions in Stack Overflow but the main problem occurs that only the height of editor container increases but the lines of Ace editor remain same as before.
Moreover all of available answers of Stack Overflow are in static HTML page where they are using it like a variable
var editor = ace.edit( "smyles_editor" );
(like this):
Stack Overflow Question Crop
which I used in my React app like this:
My Code Preview
I rename the "ace" variable with "AceEditor" which I import in react from npm module, but it is giving me error like this:
Console Error Image
I want to make it resizable in a React app
A:
Depending on your exact method of resizing, the Ace container may not receive a resize event that would cause it to re-adjust, but you can dispatch one yourself, like so:
var e = new CustomEvent("resize");
e.initEvent("resize");
window.dispatchEvent(e);
|
How to resize react ace editor in react app
|
I am using React Ace {https://www.npmjs.com/package/react-ace} in my React app and I am showing the previews of user entered HTML, CSS and JavaScript below input like this:
Code Editor Preview
I want to make user able to adjust the height of editor and preview by dragging by at bottom of editor like CodePen.
I tried to resize with all of available questions in Stack Overflow but the main problem occurs that only the height of editor container increases but the lines of Ace editor remain same as before.
Moreover all of available answers of Stack Overflow are in static HTML page where they are using it like a variable
var editor = ace.edit( "smyles_editor" );
(like this):
Stack Overflow Question Crop
which I used in my React app like this:
My Code Preview
I rename the "ace" variable with "AceEditor" which I import in react from npm module, but it is giving me error like this:
Console Error Image
I want to make it resizable in a React app
|
[
"Depending on your exact method of resizing, Ace container may not get a resize event to cause it to re-adjust, but you can throw it yourself, like so:\nvar e = new CustomEvent(\"resize\");\ne.initEvent(\"resize\");\nwindow.dispatchEvent(e);\n\n"
] |
[
0
] |
[] |
[] |
[
"ace_editor",
"css",
"html",
"javascript",
"reactjs"
] |
stackoverflow_0074631017_ace_editor_css_html_javascript_reactjs.txt
|
Q:
Table is not scrollable
I coded a table and this table has so much information in it that if the page is on 100% more than half of the table is missing.
I hope you can help me. You have to add more of the table cells to recreate it.
body {
min-height: 100vh;
background-color: var(--body-color);
transition: var(--tran-05);
background-color: #18191a;
--body-color: #18191a;
--sidebar-color: #242526;
--primary-color: #3a3b3c;
--primary-color-light: #3a3b3c;
--toggle-color: #fff;
--text-color: #ccc;
}
table.content {
border-collapse: collapse;
margin: auto;
min-width: 400px;
border-radius: 5px 5px 0 0;
justify-content: center;
width: 75%;
padding: 2%;
position: absolute;
left: 50%;
top: 50%;
transform: translate(-50%, -50%);
color: #201f1b;
}
table.content thead tr {
background-color: #403f46;
color: white;
font-weight: bold;
text-align: left;
}
table.content th,
table.content td {
padding: 12px 16px;
}
table.content tbody tr {
border-bottom: 1px solid #ccc;
}
table.content tbody tr:last-of-type {
border-bottom: 2px solid #403f46;
}
table.content tbody tr.active {
font-weight: bold;
color: #403f46;
}
<table class="content">
<thead>
<tr>
<th>ServerID</th>
<th>Server Owner</th>
<th>Premium Server</th>
<th>Dev Server</th>
</tr>
</thead>
<tbody>
<tr>
<td style='color: white'><b>Test</b></td>
<td style='color: white'><b>Test</b></td>
<td style='color: white'><b>Test</b></td>
<td style='color: white'><b>Test</b></td>
</tr>
</tbody>
</table>
A:
The problem is because of this:
left: 50%;
top: 50%;
transform: translate(-50%, -50%);
it shifts the table by -50% on the y-axis, which cuts off part of it. This works fine for short elements, but not for long ones.
Instead of this, just use a div as a wrapper around the table.
Update your code like the following:
Html:
<div class="wrapper">
<table class="content">
<thead>
<tr>
<th>ServerID</th>
<th>Server Owner</th>
<th>Premium Server</th>
<th>Dev Server</th>
</tr>
</thead>
<tbody>
<tr>
<td style='color: white'><b>Test</b></td>
<td style='color: white'><b>Test</b></td>
<td style='color: white'><b>Test</b></td>
<td style='color: white'><b>Test</b></td>
</tr>
</tbody>
</table>
</div>
css:
div.wrapper {
display: flex;
justify-content: center;
align-items: center;
min-height: 100vh;
}
table.content {
border-collapse: collapse;
margin: auto;
min-width: 400px;
border-radius: 5px 5px 0 0;
width: 75%;
padding: 2%;
color: #f9f9f9;
}
now it should work correctly.
|
Table is not scrollable
|
I coded a table and this table has so much information in it that if the page is on 100% more than half of the table is missing.
I hope you can help me. You have to add more of the table cells to recreate it.
body {
min-height: 100vh;
background-color: var(--body-color);
transition: var(--tran-05);
background-color: #18191a;
--body-color: #18191a;
--sidebar-color: #242526;
--primary-color: #3a3b3c;
--primary-color-light: #3a3b3c;
--toggle-color: #fff;
--text-color: #ccc;
}
table.content {
border-collapse: collapse;
margin: auto;
min-width: 400px;
border-radius: 5px 5px 0 0;
justify-content: center;
width: 75%;
padding: 2%;
position: absolute;
left: 50%;
top: 50%;
transform: translate(-50%, -50%);
color: #201f1b;
}
table.content thead tr {
background-color: #403f46;
color: white;
font-weight: bold;
text-align: left;
}
table.content th,
table.content td {
padding: 12px 16px;
}
table.content tbody tr {
border-bottom: 1px solid #ccc;
}
table.content tbody tr:last-of-type {
border-bottom: 2px solid #403f46;
}
table.content tbody tr.active {
font-weight: bold;
color: #403f46;
}
<table class="content">
<thead>
<tr>
<th>ServerID</th>
<th>Server Owner</th>
<th>Premium Server</th>
<th>Dev Server</th>
</tr>
</thead>
<tbody>
<tr>
<td style='color: white'><b>Test</b></td>
<td style='color: white'><b>Test</b></td>
<td style='color: white'><b>Test</b></td>
<td style='color: white'><b>Test</b></td>
</tr>
</tbody>
</table>
|
[
"The problem is because of this:\nleft: 50%;\ntop: 50%;\ntransform: translate(-50%, -50%);\n\nit calculates -50% on the y-axis, which results in a cut. This works fine, but not for long elements.\nInstead of this, just using a div as a wrapper around the table.\nUpdate your code like the following:\nHtml:\n<div class=\"wrapper\">\n <table class=\"content\">\n <thead>\n <tr>\n <th>ServerID</th>\n <th>Server Owner</th>\n <th>Premium Server</th>\n <th>Dev Server</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <td style='color: white'><b>Test</b></td>\n <td style='color: white'><b>Test</b></td>\n <td style='color: white'><b>Test</b></td>\n <td style='color: white'><b>Test</b></td>\n </tr>\n </tbody>\n </table>\n</div>\n\ncss:\ndiv.wrapper {\n display: flex;\n justify-content: center;\n align-items: center;\n min-height: 100vh;\n }\n table.content {\n border-collapse: collapse;\n margin: auto;\n min-width: 400px;\n border-radius: 5px 5px 0 0;\n width: 75%;\n padding: 2%;\n color: #f9f9f9;\n }\n\nnow it should work correctly.\n"
] |
[
0
] |
[] |
[] |
[
"css",
"html",
"scrollable"
] |
stackoverflow_0074658154_css_html_scrollable.txt
|
Q:
FileNotFoundError: [Errno 2] No such file or directory: './iris.csv'
I'm getting this error for my Python code using the IDLE Shell and I'm not sure how to resolve it. I've tried downloading and adding the iris.csv file into the same place as the following .py file but it just gives me another set of errors like shown. If someone could help me it would be greatly appreciated!
-------------------------------------------------------------------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Users\kyle_\OneDrive\Documents\COIS 4400H\Lab 5.py", line 17, in <module>
iris_df = pd.read_csv('./iris.csv')
File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\util\_decorators.py", line 211, in wrapper
return func(*args, **kwargs)
File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\util\_decorators.py", line 317, in wrapper
return func(*args, **kwargs)
File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\io\parsers\readers.py", line 950, in read_csv
return _read(filepath_or_buffer, kwds)
File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\io\parsers\readers.py", line 605, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\io\parsers\readers.py", line 1442, in __init__
self._engine = self._make_engine(f, self.engine)
File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\io\parsers\readers.py", line 1729, in _make_engine
self.handles = get_handle(
File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\io\common.py", line 857, in get_handle
handle = open(
FileNotFoundError: [Errno 2] No such file or directory: './iris.csv'
This is the code that I'm using:
-------------------------------------------------------------------------------------------------------------------------------
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import datetime as dt
import sklearn
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from scipy.cluster.hierarchy import linkage
from scipy.cluster.hierarchy import dendrogram
from scipy.cluster.hierarchy import cut_tree
iris_df = pd.read_csv('./iris.csv')
iris_df.head()
iris_df['iris'].drop_duplicates()
iris_df = iris_df.drop('iris',axis=1)
scaler = StandardScaler()
iris_df_scaled = scaler.fit_transform(iris_df)
iris_df_scaled.shape
sse = []
range_n_clusters = [2, 3, 4, 5, 6, 7, 8 , 9 , 10 ]
for num_clusters in range_n_clusters:
kmeans = KMeans(n_clusters=num_clusters, max_iter=50)
kmeans.fit(iris_df_scaled)
sse.append(kmeans.inertia_)
plt.plot(sse)
# 1. Kmeans with k=3
kmeans = KMeans(n_clusters=3, max_iter=50)
y = kmeans.fit_predict(iris_df_scaled)
y
iris_df['Label'] = kmeans.labels_
iris_df.head()
plt.scatter(iris_df_scaled[y == 0, 0], iris_df_scaled[y == 0, 1], s = 100, c = 'purple', label = 'Iris-setosa')
plt.scatter(iris_df_scaled[y == 1, 0], iris_df_scaled[y == 1, 1], s = 100, c = 'orange', label = 'Iris-versicolour')
plt.scatter(iris_df_scaled[y == 2, 0], iris_df_scaled[y == 2, 1], s = 100, c = 'green', label = 'Iris-virginica')
# 2. Hierarchical clustering
plt.figure(figsize=(15, 5))
mergings = linkage(iris_df_scaled,method='complete',metric='euclidean')
dendrogram(mergings)
plt.show()
cluster_hier = cut_tree(mergings,n_clusters=3).reshape(-1)
iris_df['Label'] = cluster_hier
iris_df.head()
plt.scatter(iris_df_scaled[cluster_hier == 0, 0], iris_df_scaled[cluster_hier == 0, 1], s = 100, c = 'purple', label = 'Iris-setosa')
plt.scatter(iris_df_scaled[cluster_hier == 1, 0], iris_df_scaled[cluster_hier == 1, 1], s = 100, c = 'orange', label = 'Iris-versicolour')
plt.scatter(iris_df_scaled[cluster_hier == 2, 0], iris_df_scaled[cluster_hier == 2, 1], s = 100, c = 'green', label = 'Iris-virginica')
A:
It looks like you are looking in the path that ends with \Lab 5.py, which is just your Python script itself. You need to look one level "above" your script, i.e. the directory that contains both the script and the iris.csv file.
Try: iris_df = pd.read_csv('../iris.csv')
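If you want the script to work no matter which working directory IDLE starts it in, a small sketch (assuming iris.csv sits in the same folder as Lab 5.py) is to build the path from the script's own location:
from pathlib import Path
import pandas as pd

# __file__ is this script; .resolve().parent is the folder that contains it,
# so the path below points at iris.csv next to the script regardless of cwd.
csv_path = Path(__file__).resolve().parent / 'iris.csv'
iris_df = pd.read_csv(csv_path)
print(iris_df.head())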
|
FileNotFoundError: [Errno 2] No such file or directory: './iris.csv'
|
I'm getting this error for my Python code using the IDLE Shell and I'm not sure how to resolve it. I've tried downloading and adding the iris.csv file into the same place as the following .py file but it just gives me another set of errors like shown. If someone could help me it would be greatly appreciated!
-------------------------------------------------------------------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Users\kyle_\OneDrive\Documents\COIS 4400H\Lab 5.py", line 17, in <module>
iris_df = pd.read_csv('./iris.csv')
File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\util\_decorators.py", line 211, in wrapper
return func(*args, **kwargs)
File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\util\_decorators.py", line 317, in wrapper
return func(*args, **kwargs)
File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\io\parsers\readers.py", line 950, in read_csv
return _read(filepath_or_buffer, kwds)
File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\io\parsers\readers.py", line 605, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\io\parsers\readers.py", line 1442, in __init__
self._engine = self._make_engine(f, self.engine)
File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\io\parsers\readers.py", line 1729, in _make_engine
self.handles = get_handle(
File "C:\Users\kyle_\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pandas\io\common.py", line 857, in get_handle
handle = open(
FileNotFoundError: [Errno 2] No such file or directory: './iris.csv'
This is the code that I'm using:
-------------------------------------------------------------------------------------------------------------------------------
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import datetime as dt
import sklearn
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from scipy.cluster.hierarchy import linkage
from scipy.cluster.hierarchy import dendrogram
from scipy.cluster.hierarchy import cut_tree
iris_df = pd.read_csv('./iris.csv')
iris_df.head()
iris_df['iris'].drop_duplicates()
iris_df = iris_df.drop('iris',axis=1)
scaler = StandardScaler()
iris_df_scaled = scaler.fit_transform(iris_df)
iris_df_scaled.shape
sse = []
range_n_clusters = [2, 3, 4, 5, 6, 7, 8 , 9 , 10 ]
for num_clusters in range_n_clusters:
kmeans = KMeans(n_clusters=num_clusters, max_iter=50)
kmeans.fit(iris_df_scaled)
sse.append(kmeans.inertia_)
plt.plot(sse)
# 1. Kmeans with k=3
kmeans = KMeans(n_clusters=3, max_iter=50)
y = kmeans.fit_predict(iris_df_scaled)
y
iris_df['Label'] = kmeans.labels_
iris_df.head()
plt.scatter(iris_df_scaled[y == 0, 0], iris_df_scaled[y == 0, 1], s = 100, c = 'purple', label = 'Iris-setosa')
plt.scatter(iris_df_scaled[y == 1, 0], iris_df_scaled[y == 1, 1], s = 100, c = 'orange', label = 'Iris-versicolour')
plt.scatter(iris_df_scaled[y == 2, 0], iris_df_scaled[y == 2, 1], s = 100, c = 'green', label = 'Iris-virginica')
# 2. Hierarchical clustering
plt.figure(figsize=(15, 5))
mergings = linkage(iris_df_scaled,method='complete',metric='euclidean')
dendrogram(mergings)
plt.show()
cluster_hier = cut_tree(mergings,n_clusters=3).reshape(-1)
iris_df['Label'] = cluster_hier
iris_df.head()
plt.scatter(iris_df_scaled[cluster_hier == 0, 0], iris_df_scaled[cluster_hier == 0, 1], s = 100, c = 'purple', label = 'Iris-setosa')
plt.scatter(iris_df_scaled[cluster_hier == 1, 0], iris_df_scaled[cluster_hier == 1, 1], s = 100, c = 'orange', label = 'Iris-versicolour')
plt.scatter(iris_df_scaled[cluster_hier == 2, 0], iris_df_scaled[cluster_hier == 2, 1], s = 100, c = 'green', label = 'Iris-virginica')
|
[
"Looks like you are looking in the file path that ends with \\Lab 5 .py, which will just contain your python script. So you need to look one layer \"above\" your python script, which is the directory containing the script and the iris.csv-file.\nTry: iris_df = pd.read_csv('../iris.csv')\n"
] |
[
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074658611_python.txt
|
Q:
Validation python, Using GUI
I am attempting to validate the text box field so that the user can only insert integers, although i have used a while loop to attempt and cannot figure it out I keep getting errors. Please help.
from tkinter import *
import tkinter as tk
from tkinter.tix import *
# setup the UI
root = Tk()
# Give the UI a title
root.title("Distance converter Miles to Kilometers")
# set window geometry
root.geometry("480x130")
# setup the buttons
valRadio = tk.IntVar()
myText=tk.StringVar()
e1 =tk.IntVar()
def calculate(*arg):
while True:
try:
if valRadio.get() == 1:
# get the miles ( Calculation )
res = round(float(e1.get()) / 1.6093,2)
# set the result text
myText.set( "Your input converts to " + str(res) + " Miles")
break
if valRadio.get() == 2:
# get the kilometeres
res = round(float(e1.get()) * 1.6093,2)
# set the result text
myText.set( "Your input converts to " + str(res) + " Kilometers")
break
if ValueError:
myText.set ("Please check selections, only Integers are allowed")
break
else:
# print error message
res = round(float(e1.get()) / 1.6093,2)
myText.set ("Please check selections, a field cannot be empty")
break
except ValueError:
myText.set ("Please check selections, a field cannot be empty")
break
# Set the label for Instructions and how to use the calculator
instructions = Label(root, text="""Hover me:""")
instructions.grid(row=0, column=1)
# set the label to determine the distance field
conversion = tk.Label( text=" Value to be converted :" )
conversion.grid(row=1,column = 0,)
# set the entry box to enable the user to input their distance
tk.Entry(textvariable = e1).grid(row=1, column=1)
#set the label to determine the result of the program and output the users results below it
tk.Label(text = "Result:").grid(row=5,column = 0)
result = tk.Label(text="(result)", textvariable=myText)
result.grid(row=5,column=1)
# the radio button control for Miles
r1 = tk.Radiobutton(text="Miles",
variable=valRadio, value=1).grid(row=3, column=0)
# the radio button control for Kilometers
r2 = tk.Radiobutton(text="Kilometers",
variable=valRadio, value=2).grid(row=3, column=2)
# enable a calculate button and decide what it will do as well as wher on the grid it belongs
calculate_button = tk.Button(text="Calculate \n (Enter)", command=calculate)
calculate_button.grid(row=6, column=2)
# deploy the UI
root.mainloop()
I have attempted to use the While loop inside the code although I can only get it to where if the user inputs text and doesn't select a radio button the error will display but I would like to have it where the text box in general will not allow anything but integers and if it receives string print the error as it does if the radio buttons aren't selected.
A:
Define the validation type and validatecommand. validate="key" makes the widget run validatecommand on every key input; the keystroke is only accepted if that function returns True, which here is the 'validate' function.
vcmd = (root.register(validate), '%P')
tk.Entry(textvariable = e1,validate="key", validatecommand=vcmd).grid(row=1, column=1)
this is the validation function
def validate(input):
if not input:
return True
elif re.fullmatch(r'[0-9]*',input):
return True
myText.set("Please check selections, only Integers are allowed")
return False
It returns True only when the input consists entirely of digits ([0-9]* is a regular expression that matches any run of digits) or is empty. If the input contains any other character it returns False and the keystroke is rejected.
Also, do not forget the import:
import re
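Putting the pieces together, a minimal self-contained sketch (the widget layout here is simplified, not the exact grid from the question) looks like this:
import re
import tkinter as tk

def validate(proposed):
    # '%P' passes the text the entry would contain if the edit were allowed;
    # accept it only when it is empty or made up solely of digits.
    return re.fullmatch(r'[0-9]*', proposed) is not None

root = tk.Tk()
vcmd = (root.register(validate), '%P')
tk.Entry(root, validate='key', validatecommand=vcmd).grid(row=0, column=0)
root.mainloop()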
|
Validation python, Using GUI
|
I am attempting to validate the text box field so that the user can only insert integers, although i have used a while loop to attempt and cannot figure it out I keep getting errors. Please help.
from tkinter import *
import tkinter as tk
from tkinter.tix import *
# setup the UI
root = Tk()
# Give the UI a title
root.title("Distance converter Miles to Kilometers")
# set window geometry
root.geometry("480x130")
# setup the buttons
valRadio = tk.IntVar()
myText=tk.StringVar()
e1 =tk.IntVar()
def calculate(*arg):
while True:
try:
if valRadio.get() == 1:
# get the miles ( Calculation )
res = round(float(e1.get()) / 1.6093,2)
# set the result text
myText.set( "Your input converts to " + str(res) + " Miles")
break
if valRadio.get() == 2:
# get the kilometeres
res = round(float(e1.get()) * 1.6093,2)
# set the result text
myText.set( "Your input converts to " + str(res) + " Kilometers")
break
if ValueError:
myText.set ("Please check selections, only Integers are allowed")
break
else:
# print error message
res = round(float(e1.get()) / 1.6093,2)
myText.set ("Please check selections, a field cannot be empty")
break
except ValueError:
myText.set ("Please check selections, a field cannot be empty")
break
# Set the label for Instructions and how to use the calculator
instructions = Label(root, text="""Hover me:""")
instructions.grid(row=0, column=1)
# set the label to determine the distance field
conversion = tk.Label( text=" Value to be converted :" )
conversion.grid(row=1,column = 0,)
# set the entry box to enable the user to input their distance
tk.Entry(textvariable = e1).grid(row=1, column=1)
#set the label to determine the result of the program and output the users results below it
tk.Label(text = "Result:").grid(row=5,column = 0)
result = tk.Label(text="(result)", textvariable=myText)
result.grid(row=5,column=1)
# the radio button control for Miles
r1 = tk.Radiobutton(text="Miles",
variable=valRadio, value=1).grid(row=3, column=0)
# the radio button control for Kilometers
r2 = tk.Radiobutton(text="Kilometers",
variable=valRadio, value=2).grid(row=3, column=2)
# enable a calculate button and decide what it will do as well as wher on the grid it belongs
calculate_button = tk.Button(text="Calculate \n (Enter)", command=calculate)
calculate_button.grid(row=6, column=2)
# deploy the UI
root.mainloop()
I have attempted to use a while loop inside the code, but I can only get it to the point where the error displays if the user inputs text and doesn't select a radio button. I would like the text box to reject anything but integers in general and, if it receives a string, print the error just as it does when the radio buttons aren't selected.
|
[
"define validation type and validatecommand. validate = key makes with every key input it runs validatecommand. It only types if that function returns true which is 'validate' function in this case.\nvcmd = (root.register(validate), '%P')\ntk.Entry(textvariable = e1,validate=\"key\", validatecommand=vcmd).grid(row=1, column=1)\n\nthis is the validation function\ndef validate(input):\n if not input:\n return True\n elif re.fullmatch(r'[0-9]*',input):\n return True\n myText.set(\"Please check selections, only Integers are allowed\")\n return False\n\nit return true only when its full of numbers([0-9]* is an regular expression which defines all numbers) or empty. If it contains any letter it return False any it denied this way.\nAlso do not forget to imports\nimport re\n\n"
] |
[
0
] |
[] |
[] |
[
"interface",
"python",
"tkinter"
] |
stackoverflow_0074650750_interface_python_tkinter.txt
|
Q:
How can i fix the error when starting cqlsh
It's my first time using Cassandra, and when I try to run the "cqlsh" command I get an error like this.
C:\Users\RanggaSaputra>cqlsh
File "C:\apache-cassandra-3.11.14\bin\cqlsh.py", line 146
except ImportError, e:
^^^^^^^^^^^^^^
My device is Windows 11.
I've tried following advice from YouTube, changing the start_rpc and enable_user variables to true.
A:
What version of python do you use? I believe you'll want version 2.7 for Cassandra 3.11 as specified here:
https://cassandra.apache.org/doc/3.11/cassandra/getting_started/installing.html
Not having the proper version is likely the issue.
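As a quick sanity check (just an illustrative sketch, not part of cqlsh itself), you could confirm which interpreter version is being picked up:
import sys

# cqlsh.py shipped with Cassandra 3.11 is written for Python 2, so constructs
# like "except ImportError, e:" are a SyntaxError under Python 3.
major, minor = sys.version_info[:2]
if (major, minor) == (2, 7):
    print("Python 2.7 detected - compatible with Cassandra 3.11's cqlsh")
else:
    print("Found Python {}.{} - Cassandra 3.11's cqlsh expects Python 2.7".format(major, minor))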
|
How can i fix the error when starting cqlsh
|
It's my first time using Cassandra, and when I try to run the "cqlsh" command I get an error like this.
C:\Users\RanggaSaputra>cqlsh
File "C:\apache-cassandra-3.11.14\bin\cqlsh.py", line 146
except ImportError, e:
^^^^^^^^^^^^^^
My device is Windows 11.
I've tried following advice from YouTube, changing the start_rpc and enable_user variables to true.
|
[
"What version of python do you use? I believe you'll want version 2.7 for Cassandra 3.11 as specified here:\nhttps://cassandra.apache.org/doc/3.11/cassandra/getting_started/installing.html\nNot having the proper version is likely the issue.\n"
] |
[
1
] |
[] |
[] |
[
"cassandra",
"cqlsh"
] |
stackoverflow_0074657418_cassandra_cqlsh.txt
|
Q:
Why is writing beans to CSV not working after OpenCSV upgrade from 4.1 to 5.7.1
I'm using OpenCSV to write Java beans to CSV file. Here's the code snippet:
public void generateCSVFile(List<?> domains, String[] columns, String fileName) {
try {
final FileWriter writer = new FileWriter(fileName);
CSVWriter csvWriter = new CSVWriter(writer);
csvWriter.writeNext(columns);
if (CollectionUtils.isNotEmpty(domains)) {
ColumnPositionMappingStrategy mappingStrategy = new ColumnPositionMappingStrategy();
mappingStrategy.setType(Class.forName(domains.get(0).getClass().getTypeName()));
mappingStrategy.setColumnMapping(columns);
StatefulBeanToCsvBuilder<?> builder = new StatefulBeanToCsvBuilder(
csvWriter);
StatefulBeanToCsv beanWriter = builder.withMappingStrategy(mappingStrategy).build();
beanWriter.write(domains);
csvWriter.close();
writer.close();
}
}
catch(Exception e){
LOG.error("Exception occured while generating CSV file : {}", e)
}
}
This was working fine with OpenCSV version 4.1: it generated a CSV file with the header and the columns in the order I passed in columns (String[]). Recently, I updated the version to 5.7.1, after which it generates a CSV file with only the header and none of the data.
I've tried using HeaderColumnNameMappingStrategy. It's generating the file with data, but in the ascending order of the attributes and with attributes in Uppercase, as it's the default behavior.
Is there a way to get ColumnPositionMappingStrategy work with version 5.7.1, like it worked in version 4.1?
A:
This approach will work with both OpenCSV versions, 4.1 and 5.7:
Serialization
public static <T> void writeToCSV(String location, Class<T> type, List<T> records, String[] columns)
throws IOException, CsvRequiredFieldEmptyException, CsvDataTypeMismatchException {
ColumnPositionMappingStrategy<T> mappingStrategy = new ColumnPositionMappingStrategy<>();
mappingStrategy.setType(type);
mappingStrategy.setColumnMapping(columns);
try (Writer writer = new FileWriter(location)) {
StatefulBeanToCsv<T> beanToCsv = new StatefulBeanToCsvBuilder<T>(writer)
.withMappingStrategy(mappingStrategy)
.withQuotechar(CSVWriter.NO_QUOTE_CHARACTER)
.build();
beanToCsv.write(records);
}
}
Usage example
String[] columns = new String[]{"a", "b"};
List<Bean> objects = List.of(new Bean("A1", "B1"),
new Bean("A2", "B2"));
String location = "beans.csv";
try {
CSVUtils.writeToCSV(location, Bean.class, objects, columns);
} catch (IOException | CsvRequiredFieldEmptyException | CsvDataTypeMismatchException e) {
e.printStackTrace();
}
Given that the class Bean has two String attributes a and b.
I tested the above with Java 8, 11 and 17.
|
Why is writing beans to CSV not working after OpenCSV upgrade from 4.1 to 5.7.1
|
I'm using OpenCSV to write Java beans to CSV file. Here's the code snippet:
public void generateCSVFile(List<?> domains, String[] columns, String fileName) {
try {
final FileWriter writer = new FileWriter(fileName);
CSVWriter csvWriter = new CSVWriter(writer);
csvWriter.writeNext(columns);
if (CollectionUtils.isNotEmpty(domains)) {
ColumnPositionMappingStrategy mappingStrategy = new ColumnPositionMappingStrategy();
mappingStrategy.setType(Class.forName(domains.get(0).getClass().getTypeName()));
mappingStrategy.setColumnMapping(columns);
StatefulBeanToCsvBuilder<?> builder = new StatefulBeanToCsvBuilder(
csvWriter);
StatefulBeanToCsv beanWriter = builder.withMappingStrategy(mappingStrategy).build();
beanWriter.write(domains);
csvWriter.close();
writer.close();
}
}
catch(Exception e){
LOG.error("Exception occured while generating CSV file : {}", e)
}
}
This was working fine with OpenCSV version 4.1: it generated a CSV file with the header and the columns in the order I passed in columns (String[]). Recently, I updated the version to 5.7.1, after which it generates a CSV file with only the header and none of the data.
I've tried using HeaderColumnNameMappingStrategy. It's generating the file with data, but in the ascending order of the attributes and with attributes in Uppercase, as it's the default behavior.
Is there a way to get ColumnPositionMappingStrategy work with version 5.7.1, like it worked in version 4.1?
|
[
"This approach will work with both OpenCSV versions, 4.1 and 5.7:\nSerialization\npublic static <T> void writeToCSV(String location, Class<T> type, List<T> records, String[] columns)\n throws IOException, CsvRequiredFieldEmptyException, CsvDataTypeMismatchException {\n\n ColumnPositionMappingStrategy<T> mappingStrategy = new ColumnPositionMappingStrategy<>();\n mappingStrategy.setType(type);\n mappingStrategy.setColumnMapping(columns);\n try (Writer writer = new FileWriter(location)) {\n StatefulBeanToCsv<T> beanToCsv = new StatefulBeanToCsvBuilder<T>(writer)\n .withMappingStrategy(mappingStrategy)\n .withQuotechar(CSVWriter.NO_QUOTE_CHARACTER)\n .build();\n beanToCsv.write(records);\n }\n}\n\nUsage example\nString[] columns = new String[]{\"a\", \"b\"};\nList<Bean> objects = List.of(new Bean(\"A1\", \"B1\"),\n new Bean(\"A2\", \"B2\"));\nString location = \"beans.csv\";\n\ntry {\n CSVUtils.writeToCSV(location, Bean.class, objects, columns);\n} catch (IOException | CsvRequiredFieldEmptyException | CsvDataTypeMismatchException e) {\n e.printStackTrace();\n}\n\nGiven that the class Bean has two String attributes a and b.\nI tested the above with Java 8, 11 and 17.\n"
] |
[
0
] |
[] |
[] |
[
"csv",
"java",
"opencsv",
"serialization"
] |
stackoverflow_0074379791_csv_java_opencsv_serialization.txt
|
Q:
How to create a drill down graph using apache superset?
Is it possible to create a drill down graph with apache superset?
Say for example - population of all countries and onclick of a country, population of all states within that country should be drawn and onclick of state, population of state should be drawn.
Can someone help me with steps/tips to create this using apache superset as I did not find any example/option to create the same.
A:
Please see the response of mistercrunch (one of the creators of Apache Superset) below or here: https://github.com/apache/incubator-superset/issues/2890.
Drill down assumes the framework is aware of hierarchies which Superset isn't at the moment. We encourage our users to slice and dice by entering the explore mode, applying filters and altering the "Group By" field which is pretty easy and very flexible. It's an open field instead of a guided flow.
A:
There is a walkthrough on this from ApacheCon Asia 2022 on youtube - https://www.youtube.com/watch?v=7YnpKLZ1PRM
More than I can summarize here for you
|
How to create a drill down graph using apache superset?
|
Is it possible to create a drill down graph with apache superset?
Say for example - population of all countries and onclick of a country, population of all states within that country should be drawn and onclick of state, population of state should be drawn.
Can someone help me with steps/tips to create this using apache superset as I did not find any example/option to create the same.
|
[
"Please see the response of mistercrunch (one of the creators of Apache Superset) below or here: https://github.com/apache/incubator-superset/issues/2890.\n\nDrill down assumes the framework is aware of hierarchies which Superset isn't at the moment. We encourage our users to slice and dice by entering the explore mode, applying filters and altering the \"Group By\" field which is pretty easy and very flexible. It's an open field instead of a guided flow.\n\n",
"There is a walkthrough on this from ApacheCon Asia 2022 on youtube - https://www.youtube.com/watch?v=7YnpKLZ1PRM\nMore than I can summarize here for you\n"
] |
[
1,
0
] |
[
"It is possible by using custom JavaScript and charts.\n"
] |
[
-7
] |
[
"apache_superset"
] |
stackoverflow_0054801314_apache_superset.txt
|
Q:
how to fetch data from an api and display it as raw data on the front end?
Okay, so it's been a while since I last used React. My problem is very simple, I just don't know how to do it. Basically, I am fetching data from an API and putting it inside a state. I want to display the data I'm fetching as raw data instead of mapping over it. This is what I mean.
This is my component:
const App = () => {
const [info, setInfo] = useState([])
const getData = async () => {
const res = await fetch ('https://dummyjson.com/products/')
const data = await res.json()
setInfo(data.products)
}
console.log(info)
return(
<div>
{info}
<button onClick={getData}>click me</button>
</div>
)
}
export default App;
Basically, when I click the button, I want the info to be displayed like this in the browser:
{
"id": 1,
"title": "iPhone 9",
"description": "An apple mobile which is nothing like apple",
"price": 549,
"discountPercentage": 12.96,
"rating": 4.69,
"stock": 94,
"brand": "Apple",
"category": "smartphones",
"thumbnail": "https://i.dummyjson.com/data/products/1/thumbnail.jpg",
"images": [
"https://i.dummyjson.com/data/products/1/1.jpg",
"https://i.dummyjson.com/data/products/1/2.jpg",
"https://i.dummyjson.com/data/products/1/3.jpg",
"https://i.dummyjson.com/data/products/1/4.jpg",
"https://i.dummyjson.com/data/products/1/thumbnail.jpg"
]
}
That is all. I just want to display the raw JSON data on the front end, but as my code is now, every time I click the button I get this error:
Objects are not valid as a React child (found: object with keys {id, title, description, price, discountPercentage, rating, stock, brand, category, thumbnail, images}). If you meant to render a collection of children, use an array instead
A:
Stringify your JSON data <pre>{JSON.stringify(data)}</pre> or Check out this thread with how to Pretty Printing JSON with React
A:
The issue is JSON is not valid to display directly. That's what the error says:
Objects are not valid as a React child (found: object with keys {id, title, description, price, discountPercentage, rating, stock, brand, category, thumbnail, images}). If you meant to render a collection of children, use an array instead
You can stringify the info to view it as a whole:
const App = () => {
const [info, setInfo] = React.useState([]);
const getData = () => {
fetch("https://dummyjson.com/products/")
.then((res) => res.json())
.then((data) => {
setInfo(data.products);
});
};
return (
<div>
{JSON.stringify(info)}
<button onClick={getData}>click me</button>
</div>
);
};
ReactDOM.render(<App />, document.querySelector('.react'));
<script crossorigin src="https://unpkg.com/react@16/umd/react.development.js"></script>
<script crossorigin src="https://unpkg.com/react-dom@16/umd/react-dom.development.js"></script>
<div class='react'></div>
A:
You can use JSON.stringify to stringify the value and use pre tag to prettify it. as follows:
const App = () => {
const [info, setInfo] = useState([])
const getData = async () => {
const res = await fetch ('https://dummyjson.com/products/')
const data = await res.json()
setInfo(data.products)
}
console.log(info)
return (
<div>
<pre>{JSON.stringify(info)}</pre>
<button onClick={getData}>click me</button>
</div>
)
}
export default App;
|
how to fetch data from an api and display it as raw data on the front end?
|
Okay, so it's been a while since I last used React. My problem is very simple, I just don't know how to do it. Basically, I am fetching data from an API and putting it inside a state. I want to display the data I'm fetching as raw data instead of mapping over it. This is what I mean.
This is my component:
const App = () => {
const [info, setInfo] = useState([])
const getData = async () => {
const res = await fetch ('https://dummyjson.com/products/')
const data = await res.json()
setInfo(data.products)
}
console.log(info)
return(
<div>
{info}
<button onClick={getData}>click me</button>
</div>
)
}
export default App;
Basically, when I click the button, I want the info to be displayed like this in the browser:
{
"id": 1,
"title": "iPhone 9",
"description": "An apple mobile which is nothing like apple",
"price": 549,
"discountPercentage": 12.96,
"rating": 4.69,
"stock": 94,
"brand": "Apple",
"category": "smartphones",
"thumbnail": "https://i.dummyjson.com/data/products/1/thumbnail.jpg",
"images": [
"https://i.dummyjson.com/data/products/1/1.jpg",
"https://i.dummyjson.com/data/products/1/2.jpg",
"https://i.dummyjson.com/data/products/1/3.jpg",
"https://i.dummyjson.com/data/products/1/4.jpg",
"https://i.dummyjson.com/data/products/1/thumbnail.jpg"
]
}
That is all. I just want to display the raw JSON data on the front end, but as my code is now, every time I click the button I get this error:
Objects are not valid as a React child (found: object with keys {id, title, description, price, discountPercentage, rating, stock, brand, category, thumbnail, images}). If you meant to render a collection of children, use an array instead
|
[
"Stringify your JSON data <pre>{JSON.stringify(data)}</pre> or Check out this thread with how to Pretty Printing JSON with React\n",
"The issue is JSON is not valid to display directly. That's what the error says:\nObjects are not valid as a React child (found: object with keys {id, title, description, price, discountPercentage, rating, stock, brand, category, thumbnail, images}). If you meant to render a collection of children, use an array instead\n\nYou can stringify the info to view it as a whole:\n\n\nconst App = () => {\n const [info, setInfo] = React.useState([]);\n\n const getData = () => {\n fetch(\"https://dummyjson.com/products/\")\n .then((res) => res.json())\n .then((data) => {\n setInfo(data.products);\n });\n };\n\n\n return (\n <div>\n {JSON.stringify(info)}\n <button onClick={getData}>click me</button>\n </div>\n );\n};\n\nReactDOM.render(<App />, document.querySelector('.react'));\n<script crossorigin src=\"https://unpkg.com/react@16/umd/react.development.js\"></script>\n<script crossorigin src=\"https://unpkg.com/react-dom@16/umd/react-dom.development.js\"></script>\n<div class='react'></div>\n\n\n\n",
"You can use JSON.stringify to stringify the value and use pre tag to prettify it. as follows:\nconst App = () => {\nconst [info, setInfo] = useState([])\n\nconst getData = async () => {\n const res = await fetch ('https://dummyjson.com/products/')\n const data = await res.json()\n setInfo(data.products)\n }\nconsole.log(info)\n \n\nreturn (\n <div>\n <pre>{JSON.stringify(info)}</pre>\n <button onClick={getData}>click me</button>\n </div>\n )\n}\n\nexport default App;\n\n\n\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"arrays",
"fetch",
"object",
"react_hooks",
"reactjs"
] |
stackoverflow_0074658562_arrays_fetch_object_react_hooks_reactjs.txt
|
Q:
Docker pull keeps stucking
I was learning Docker and when I executed the command
docker pull gcc
The screen got stuck at a particular point, making no progress. Please refer to the image below.
Stuck Docker screen
I tried forcing it to abort the process and restarted it, but it was all the same. I even tried the command
gcc pull python
but it is of no use.
For reference, my OS is Windows 10.
Please help me understand the issue.
|
Docker pull keeps stucking
|
I was learning Docker and when I executed the command
docker pull gcc
The screen got stuck at a particular point, making no progress. Please refer to the image below.
Stuck Docker screen
I tried forcing it to abort the process and restarted it, but it was all the same. I even tried the command
gcc pull python
but it is of no use.
For reference, my OS is Windows 10.
Please help me understand the issue.
|
[] |
[] |
[
"I somehow figured out that this error was happening because of a firewall settings, which my LAN service provider was using.\n"
] |
[
-1
] |
[
"docker",
"docker_image",
"docker_machine"
] |
stackoverflow_0074585742_docker_docker_image_docker_machine.txt
|
Q:
How to find node js location in mac M1?
I want to remove Node so I can install it with nvm, but I can't uninstall it.
I installed Node twice: once I installed it wrong, and the second time I did it right. I uninstalled one of the two installations, but I don't know where the other one is, and because of that I can't uninstall it.
I already tried:
brew uninstall --force node
looking at tutorials on how to uninstall Node.js from a Mac M1
Does anyone know how to find the Node.js location on a Mac M1?
A:
It depends on your package manager.
For example, the asdf version manager shows info like this:
asdf info nodejs
OS:
Linux slonzal 5.19.0-26-generic #27-Ubuntu SMP PREEMPT_DYNAMIC Wed Nov 23 20:44:15 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
SHELL:
zsh 5.9 (x86_64-ubuntu-linux-gnu)
ASDF VERSION:
v0.10.2-7e7a1fa
ASDF ENVIRONMENT VARIABLES:
ASDF_DIR=/home/slon/.asdf
ASDF INSTALLED PLUGINS:
nodejs https://github.com/asdf-vm/asdf-nodejs.git
yarn https://github.com/twuni/asdf-yarn.git main 376c540
Other version managers have their own syntax. Try npm -h if you use npm.
|
How to find node js location in mac M1?
|
I want to remove Node so I can install it with nvm, but I can't uninstall it.
I installed Node twice: once I installed it wrong, and the second time I did it right. I uninstalled one of the two installations, but I don't know where the other one is, and because of that I can't uninstall it.
I already tried:
brew uninstall --force node
looking at tutorials on how to uninstall Node.js from a Mac M1
Does anyone know how to find the Node.js location on a Mac M1?
|
[
"It is dependent by your packet manager.\nFor example, asdf version manager show info like that:\nasdf info nodejs\nOS:\nLinux slonzal 5.19.0-26-generic #27-Ubuntu SMP PREEMPT_DYNAMIC Wed Nov 23 20:44:15 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux\n\nSHELL:\nzsh 5.9 (x86_64-ubuntu-linux-gnu)\n\nASDF VERSION:\nv0.10.2-7e7a1fa\n\nASDF ENVIRONMENT VARIABLES:\nASDF_DIR=/home/slon/.asdf\n\nASDF INSTALLED PLUGINS:\nnodejs https://github.com/asdf-vm/asdf-nodejs.git \nyarn https://github.com/twuni/asdf-yarn.git main 376c540\n\nOther version managers have own syntax. Try to npm -h, if you use npm.\n"
] |
[
0
] |
[] |
[] |
[
"node.js"
] |
stackoverflow_0074658605_node.js.txt
|
Q:
Symfony3 + Twig : Join array with
I'm trying to display all elements inside an array and separate them with a line break, but I can't get it to work.
Here is what I tried:
{{ user.roles | join('<br/>') }}
{{ user.roles | join('<br/>' | raw) }}
{{ user.roles | join('\n' | nl2br | raw) }}
Every time I get something like:
ROLE_PARENT<br/>ROLE_ADMIN<br/>ROLE_MANAGER<br/>ROLE_USER
How can I tell Twig to render <br/> as HTML?
I could loop through the array, but this isn't the first time I've tried to render an HTML tag and I would like a definitive solution to this problem.
A:
A slight modification to your third attempt should do the trick as well.
{{ user.roles | join('\n')| nl2br }}
A:
I found the error: I wasn't applying the filter in the right place. This works:
{{ user.roles | join('<br/>') | raw }}
A:
What about a loop ?
https://twig.sensiolabs.org/doc/2.x/tags/for.html
{%- for role in user.roles -%}
{{- role -}}<br>
{%- endfor -%}
Note: You can manage whitespaces with -
https://twig.sensiolabs.org/doc/2.x/templates.html#whitespace-control
A:
{{ user.roles | join("\n") | nl2br | raw }}
This will work always
|
Symfony3 + Twig : Join array with
|
I'm trying to display all elements inside an array and separate them with a line break, but I can't get it to work.
Here is what I tried:
{{ user.roles | join('<br/>') }}
{{ user.roles | join('<br/>' | raw) }}
{{ user.roles | join('\n' | nl2br | raw) }}
Every time I get something like:
ROLE_PARENT<br/>ROLE_ADMIN<br/>ROLE_MANAGER<br/>ROLE_USER
How can I tell Twig to render <br/> as HTML?
I could loop through the array, but this isn't the first time I've tried to render an HTML tag and I would like a definitive solution to this problem.
|
[
"A slight modification to your third attempt should do the trick as well.\n{{ user.roles | join('\\n')| nl2br }}\n\n",
"I found the error, I'm not applying the filter at the right place, this works : \n{{ user.roles | join('<br/>') | raw }}\n\n",
"What about a loop ?\nhttps://twig.sensiolabs.org/doc/2.x/tags/for.html\n{%- for role in user.roles -%}\n {{- role -}}<br>\n{%- endfor -%}\n\nNote: You can manage whitespaces with -\nhttps://twig.sensiolabs.org/doc/2.x/templates.html#whitespace-control\n",
"{{ user.roles | join(\"\\n\") | nl2br | raw }}\nThis will work always\n"
] |
[
16,
2,
0,
0
] |
[] |
[] |
[
"symfony",
"twig",
"twig_filter"
] |
stackoverflow_0043807798_symfony_twig_twig_filter.txt
|
Q:
How to find and replace two different string in double quotes?
I have this string
Expected "contains text 'asdr.js'" but got: "does not contain 'asdr.js'" (2s)
I want to replace the double-quoted strings, like this:
Expected <span class="green-color-text">"contains text 'asdr.js'"</span> but got: <span class="red-color-text">"does not contain 'asdr.js'"</span> (2s)
I tried the regex solutions available on Stack Overflow, but none of them worked for me.
|
How to find and replace two different string in double quotes?
|
I have this string
Expected "contains text 'asdr.js'" but got: "does not contain 'asdr.js'" (2s)
I want to replace the double-quoted strings, like this:
Expected <span class="green-color-text">"contains text 'asdr.js'"</span> but got: <span class="red-color-text">"does not contain 'asdr.js'"</span> (2s)
I tried the regex solutions available on Stack Overflow, but none of them worked for me.
|
[] |
[] |
[
"You can extract both quoted strings like this with match 1 and 2:\nconst regex = /(^\\\"contains text \\'.*\\'\\\"|^\\\"does not contain \\'.*\\'\\\")/gm;\n\n"
] |
[
-2
] |
[
"javascript",
"regex"
] |
stackoverflow_0074658442_javascript_regex.txt
|
Q:
How to change the color of braces in JSX & TSX in Intellij 2022.3
I just updated to version 2022.3 of IntelliJ Ultimate, and the color of braces in JSX/TSX code blocks has turned yellow, but I want to revert it to the original color. In the attached image, the curly braces around the values for the props are yellow, but they should be the same color as the text for the prop names "isOpen" & "toggle".
I looked through the color settings for relevant languages but could not find anything that sounded like it would be the correct setting to change.
A:
Try editing Settings | Editor | Color Scheme | XML, Tag foreground color
|
How to change the color of braces in JSX & TSX in Intellij 2022.3
|
I just updated to version 2022.3 of IntelliJ Ultimate, and the color of braces in JSX/TSX code blocks has turned yellow, but I want to revert it to the original color. In the attached image, the curly braces around the values for the props are yellow, but they should be the same color as the text for the prop names "isOpen" & "toggle".
I looked through the color settings for relevant languages but could not find anything that sounded like it would be the correct setting to change.
|
[
"Try editing Settings | Editor | Color Scheme | XML, Tag foreground color\n"
] |
[
0
] |
[] |
[] |
[
"braces",
"colors",
"intellij_idea",
"jsx",
"settings"
] |
stackoverflow_0074635780_braces_colors_intellij_idea_jsx_settings.txt
|
Q:
How to disable analyzing code coverage for libraries
When I run tests in CLion with code coverage, I get results not only for my code, but also for GTest library. Because of this CLion shows that only 25% of lines in my project are covered. Most of the lines are located in cmake-build-debug-coverage/googletest-src/googletest with code coverage of 24%. I've added GTest to CMake the way it is suggested in its repository:
if (BUILD_TESTING)
include(FetchContent)
FetchContent_Declare(
googletest
GIT_REPOSITORY https://github.com/google/googletest.git
GIT_TAG release-1.12.1
)
FetchContent_MakeAvailable(googletest)
add_executable(${PROJECT_NAME}_test
test/matrix_test.cpp
src/matrix.cpp)
target_compile_features(${PROJECT_NAME}_test PUBLIC cxx_std_17)
target_link_libraries(${PROJECT_NAME}_test GTest::gtest_main)
target_include_directories(${PROJECT_NAME}_test PRIVATE include)
include(GoogleTest)
gtest_discover_tests(${PROJECT_NAME}_test)
endif ()
How can I make CLion show code coverage only for my code, excluding any libraries?
A:
One way of solving this problem is to disable compiler flags -fprofile-instr-generate -fcoverage-mapping, which are added by CLion, during the build of GTest library. They are stored in CMAKE_CXX_FLAGS variable. So, this variable can be first cleared and then restored back like so:
if (BUILD_TESTING)
# Exclude gtest library from code coverage analyzer
set(${PROJECT_NAME}_CMAKE_CXX_FLAGS, ${CMAKE_CXX_FLAGS})
set(CMAKE_CXX_FLAGS "")
...
set(CMAKE_CXX_FLAGS ${${PROJECT_NAME}_CMAKE_CXX_FLAGS})
endif ()
After this code coverage shows 100% coverage for the project as expected
|
How to disable analyzing code coverage for libraries
|
When I run tests in CLion with code coverage, I get results not only for my code, but also for GTest library. Because of this CLion shows that only 25% of lines in my project are covered. Most of the lines are located in cmake-build-debug-coverage/googletest-src/googletest with code coverage of 24%. I've added GTest to CMake the way it is suggested in its repository:
if (BUILD_TESTING)
include(FetchContent)
FetchContent_Declare(
googletest
GIT_REPOSITORY https://github.com/google/googletest.git
GIT_TAG release-1.12.1
)
FetchContent_MakeAvailable(googletest)
add_executable(${PROJECT_NAME}_test
test/matrix_test.cpp
src/matrix.cpp)
target_compile_features(${PROJECT_NAME}_test PUBLIC cxx_std_17)
target_link_libraries(${PROJECT_NAME}_test GTest::gtest_main)
target_include_directories(${PROJECT_NAME}_test PRIVATE include)
include(GoogleTest)
gtest_discover_tests(${PROJECT_NAME}_test)
endif ()
How can I make CLion show code coverage only for my code, excluding any libraries?
|
[
"One way of solving this problem is to disable compiler flags -fprofile-instr-generate -fcoverage-mapping, which are added by CLion, during the build of GTest library. They are stored in CMAKE_CXX_FLAGS variable. So, this variable can be first cleared and then restored back like so:\nif (BUILD_TESTING)\n # Exclude gtest library from code coverage analyzer\n set(${PROJECT_NAME}_CMAKE_CXX_FLAGS, ${CMAKE_CXX_FLAGS})\n set(CMAKE_CXX_FLAGS \"\")\n\n ...\n\n set(CMAKE_CXX_FLAGS ${${PROJECT_NAME}_CMAKE_CXX_FLAGS})\nendif ()\n\nAfter this code coverage shows 100% coverage for the project as expected\n"
] |
[
0
] |
[] |
[] |
[
"c++",
"clion",
"cmake"
] |
stackoverflow_0074658690_c++_clion_cmake.txt
|
Q:
I am trying to install eas cli and am getting the error: zsh: command not found: eas
I am in the root folder and type:
yarn global add eas-cli
I then get:
yarn global v1.22.19
warning ../../package.json: No license field
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Installed "[email protected]" with binaries:
Then I go to use eas by typing eas login and I get:
zsh: command not found: eas
A:
For some reason Yarn does not add it to the available commands; it might be a path issue or a set-up issue with Yarn.
Using npm it worked fine:
npm install -g eas-cli
A:
For mac and none of the above work. Check if your .npmrc has something like this
prefix=/Users/<username>/.npm-global
you can then add this line to your .zshrc or .bashrc
export PATH=$PATH:~/.npm-global/bin
provided that you already installed eas or eas-cli
.npm-global screenshot
A:
Make sure that it is installed globally with the -g flag. This worked for me.
A:
I used this command with "npx", like:
npx eas init --id XXXXXXXXXXXXXXXXXXX
and then it was OK.
|
I am trying to install eas cli and am getting the error: zsh: command not found: eas
|
I am in the root folder and type:
yarn global add eas-cli
I then get:
yarn global v1.22.19
warning ../../package.json: No license field
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Installed "[email protected]" with binaries:
Then I go to use eas by typing eas login and I get:
zsh: command not found: eas
|
[
"For some reason yarn does not add it to commands, might be a path issue, set-up issue with yarn.\nusing npm it worked fine\nnpm install -g eas-cli\n\n",
"For mac and none of the above work. Check if your .npmrc has something like this\nprefix=/Users/<username>/.npm-global\n\nyou can then add this line to your .zshrc or .bashrc\nexport PATH=$PATH:~/.npm-global/bin\n\nprovided that you already installed eas or eas-cli\n.npm-global screenshot\n",
"Make sure that it is installed globally with the -g flag. This worked for me.\n",
"I used this command with \"npx\" like;\nnpx eas init --id XXXXXXXXXXXXXXXXXXX\n\nthen its ok now.\n"
] |
[
2,
2,
0,
0
] |
[
"This is a permission error, frequently happens on Yarn. Use:\n$ sudo yarn global add eas-cli\n\ninstead.\n",
"it says\n$ eas [COMMAND]\n\nto use eas command.\nMaybe you can type $ first before typing eas [command]\nfor example,\n $eas login\n\nIt worked to me.\n"
] |
[
-1,
-1
] |
[
"eas",
"expo",
"react_native",
"sdk"
] |
stackoverflow_0072874829_eas_expo_react_native_sdk.txt
|
Q:
How to open an application in my application with click bottom (android Studio )
I would like to open an application inside my application (with Jetpack Compose, Kotlin or Java).
Do you think it's doable?
Currently, it opens the other application by leaving my application, with this code:
@Composable
fun HomeView(navController: NavController){
val context = LocalContext.current
intent.setPackage("com.whatsapp")
intent.setType("message/rfc822")
Button(onClick = {
context.startActivity(Intent.createChooser(intent,"choisir un app"))
}){
Text(text="open whatsapp like an iframe ")
}
}
An example of the result I want to have:
WhatsApp is launched inside my application, under my app bar.
This post
stackoverflow
from 6 years ago said it was not possible. But with the arrival of Jetpack Compose and recent versions of Android, I wonder if that has changed.
A:
It is possible to open an external application within your application using jetpack compose and kotlin or java. You can use the following code to launch the external application:
@Composable
fun HomeView(navController: NavController){
val context = LocalContext.current
intent.setPackage("com.whatsapp")
intent.setType("message/rfc822")
Button(onClick = {
context.startActivity(Intent.createChooser(intent,"choisir un app"))
}){
Text(text="open whatsapp like an iframe ")
}
}
However, it may not be possible to display the external application within your application like an iframe, as it would require integration with the external application's code. It is best to check with the developers of the external application to see if this is possible.
|
How to open an application in my application with click bottom (android Studio )
|
I would like to open an application inside my application (with Jetpack Compose, Kotlin or Java).
Do you think it's doable?
Currently, it opens the other application by leaving my application, with this code:
@Composable
fun HomeView(navController: NavController){
val context = LocalContext.current
intent.setPackage("com.whatsapp")
intent.setType("message/rfc822")
Button(onClick = {
context.startActivity(Intent.createChooser(intent,"choisir un app"))
}){
Text(text="open whatsapp like an iframe ")
}
}
An example of the result I want to have:
WhatsApp is launched inside my application, under my app bar.
This post
stackoverflow
from 6 years ago said it was not possible. But with the arrival of Jetpack Compose and recent versions of Android, I wonder if that has changed.
|
[
"It is possible to open an external application within your application using jetpack compose and kotlin or java. You can use the following code to launch the external application:\n@Composable\nfun HomeView(navController: NavController){\n\nval context = LocalContext.current\nintent.setPackage(\"com.whatsapp\")\nintent.setType(\"message/rfc822\")\nButton(onClick = {\ncontext.startActivity(Intent.createChooser(intent,\"choisir un app\"))\n}){\nText(text=\"open whatsapp like an iframe \")\n}\n}\n\nHowever, it may not be possible to display the external application within your application like an iframe, as it would require integration with the external application's code. It is best to check with the developers of the external application to see if this is possible.\n"
] |
[
0
] |
[] |
[] |
[
"android",
"android_jetpack",
"java",
"kotlin"
] |
stackoverflow_0074656948_android_android_jetpack_java_kotlin.txt
|
Q:
Django REST Framework: how to cache the result of SerializerMethodField?
I have a SerializerMethodField that make some heavy computation. I also use the same method into another SerializerMethodField of the same serializer.
How can I cache the result of the first one, so I run only once the heavy computation?
A:
Since the computation is common between two methods of the same serializer, you can use the cached_property decorator. This will cache the result of the method on the model instance, and the result will persist as long as the instance does.
from django.utils.functional import cached_property
class Person(models.Model):
@cached_property
def friends(self):
...
A:
With python version 3.8 and above you can now use the built-in caching decorators @cache and @cached_property
https://docs.python.org/3/library/functools.html#functools.cache
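For illustration only, a rough sketch of how a caching decorator can be combined with two SerializerMethodFields is to put functools.lru_cache on a private helper; the field names and the compute_stats stub below are made up, and this assumes the objects being serialized are hashable (Django model instances are):
from functools import lru_cache

from rest_framework import serializers


def compute_stats(obj):
    # Stand-in for the heavy computation (hypothetical helper).
    return {"summary": "...", "details": "..."}


class PersonSerializer(serializers.Serializer):
    stats = serializers.SerializerMethodField()
    summary = serializers.SerializerMethodField()

    @lru_cache(maxsize=None)
    def _expensive(self, obj):
        # Runs once per (serializer instance, obj) pair;
        # both SerializerMethodFields below reuse the cached result.
        return compute_stats(obj)

    def get_stats(self, obj):
        return self._expensive(obj)

    def get_summary(self, obj):
        return self._expensive(obj)["summary"]

Note that lru_cache keeps a reference to the serializer and the object for the lifetime of the cache, which is usually fine for request-scoped serializers.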
A:
I don't think you will be able to use the cached property or cache on the function itself on the SerializerMethodField but you can cache the result yourself at the instance level and then use the result if its set i.e.
from rest_framework import serializers
class WeatherLocationSerializer(serializers.ModelSerializer):
city = serializers.SerializerMethodField()
cached_city = None
def get_city(self, obj, include_counties=False):
if self.cached_city:
return self.cached_city
# Do your expensive calculation and parsing here
self.cached_city = self.obj['city']
return self.cached_city
I'm using a very old version of drf (djangorestframework==3.9.4) but this sort of class instance caching can be applied. Another way is to not use SerializerMethodField and instead do your parsing and calculations once in def to_representation(self, obj): and store the parts that repeat into a variable instead. That would look something like:
from rest_framework import serializers
class WeatherLocationSerializer(serializers.ModelSerializer):
def get_city(self, obj):
# This could be an expensive calculation this is just an example
return obj['city']
def to_representation(self, obj):
data = super().to_representation(obj)
city = self.get_city(obj)
# Now use your cached value of city where needed
data['city'] = city
data['same_city'] = city
return data
|
Django REST Framework: how to cache the result of SerializerMethodField?
|
I have a SerializerMethodField that makes some heavy computation. I also use the same method in another SerializerMethodField of the same serializer.
How can I cache the result of the first one, so that the heavy computation runs only once?
|
[
"Since the computation is common between two methods of the same serializer, you can use the cached_property decorator. This will cache the result of the method on the model instance, and the result will persist as long as the instance does.\nfrom django.utils.functional import cached_property\n\nclass Person(models.Model):\n\n @cached_property\n def friends(self):\n ...\n\n",
"With python version 3.8 and above you can now use the built-in caching decorators @cache and @cached_property\nhttps://docs.python.org/3/library/functools.html#functools.cache\n",
"I don't think you will be able to use the cached property or cache on the function itself on the SerializerMethodField but you can cache the result yourself at the instance level and then use the result if its set i.e.\nfrom rest_framework import serializers\n\nclass WeatherLocationSerializer(serializers.ModelSerializer):\n \n city = serializers.SerializerMethodField()\n\n cached_city = None\n\n\n def get_city(self, obj, include_counties=False):\n if self.cached_city:\n return self.cached_city\n \n # Do your expensive calculation and parsing here\n self.cached_city = self.obj['city']\n \n return self.cached_city\n\nI'm using a very old version of drf (djangorestframework==3.9.4) but this sort of class instance caching can be applied. Another way is to not use SerializerMethodField and instead do your parsing and calculations once in def to_representation(self, obj): and store the parts that repeat into a variable instead. That would look something like:\nfrom rest_framework import serializers\n\nclass WeatherLocationSerializer(serializers.ModelSerializer):\n \n def get_city(self, obj):\n # This could be an expensive calculation this is just an example\n return obj['city']\n \n def to_representation(self, obj):\n data = super().to_representation(obj)\n \n city = self.get_city(obj)\n \n # Now use your cached value of city where needed\n data['city'] = city\n data['same_city'] = city\n \n return data\n\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"django_rest_framework"
] |
stackoverflow_0046015919_django_rest_framework.txt
|
Q:
I am trying to read my kernel from disk using bios 13h interupt but for unknown reasons it fails even though the kernel gets loaded in ram
I am trying to make a bootloader, and when I try to read my kernel from disk it shows
Reading from disk failed!
I have checked all the registers passed to the function and they appear to be correct, as suggested on Wikipedia.
So I checked the memory address where the kernel is supposed to be loaded and found that it has been loaded correctly. Then I removed the error handling and tried to jump to the kernel location, but it didn't execute.
So I tried running it on Bochs, and it does jump to the kernel location and executes that code, but I don't seem to see any results (I run it on QEMU and debug on Bochs).
Main file
[bits 16]
[org 0x7c00]
%define ENDL 0x0D, 0x0A, 0
start: jmp boot
%include "bootloader/out.asm"
%include "bootloader/disk.asm"
halt:
hlt
jmp halt
boot:
xor ax, ax ; sets ax to 0
; sets segments to 0
mov ds, ax
mov es, ax
mov ss, ax
mov si, wlcm_msg
call print
mov dh, 1
mov bx, kernel
call read_kernel
jmp kernel
jmp halt
wlcm_msg: db "Booted to 16 bit real mode", ENDL
times 510-($-$$) db 0
dw 0xaa55
kernel: db 0
Disk routines file
;
; @params:
; dx(dl) - number of sectors to load
; bx - loaction in ram to store the read data
;
read_kernel:
pusha
push dx
mov ah, 02h
mov al, dh ; sectors to read
mov cl, 02h ; the sector to read 1 is our bootloader
mov ch, 0
mov dl, 0
mov dh, 0
int 13h
jc read_err ; if carry flag is set then there is an error
pop dx
cmp al, dh ; sector read count
jne sector_err
popa
ret
read_err:
mov si, read_err_msg
call print
mov dh, ah
jmp halt
sector_err:
mov si, sector_err_msg
call print
jmp halt
read_err_msg: db "Reading from disk failed!", ENDL
sector_err_msg: db "Incorrect number of sectors to read!", ENDL
Kernel
[bits 16]
%define ENDL 0x0D, 0x0A, 0
start: jmp main
main:
mov ah, 9
xor bh, bh
mov cx, 1
mov al, 'E'
int 10h
cli
hlt
Build script
ASM = nasm
BOOT_SRC = bootloader/main.asm
KENREL_SRC = kernel/main.asm
BUILD_DIR = build
.PHONY: all clean create run debug
all: $(BUILD_DIR)/disk.img
$(BUILD_DIR)/disk.img: $(BUILD_DIR)/bootloader.bin $(BUILD_DIR)/kernel.bin
dd if=/dev/zero of=$@ bs=512 count=2880
dd if=$< of=$@ bs=512 conv=notrunc seek=0
dd if=$(word 2,$^) of=$@ bs=512 conv=notrunc seek=1
$(BUILD_DIR)/bootloader.bin: $(BOOT_SRC) create
$(ASM) -f bin $< -o $@
$(BUILD_DIR)/kernel.bin: $(KENREL_SRC) create
$(ASM) -f bin $< -o $@
create:
mkdir -p $(BUILD_DIR)
clean:
rm -rf $(BUILD_DIR)/*
run: $(BUILD_DIR)/disk.img
qemu-system-i386 -machine q35 -drive file=$<,format=raw
run_floppy: $(BUILD_DIR)/disk.img
qemu-system-i386 -machine q35 -fda $<
debug: bochs_config $(BUILD_DIR)/disk.img
bochs -f $<
I tried a lot of ways to fix this but none seem to work, so I am pretty much convinced that this is an issue with my build script or the way I set up segments. I've found many questions with the same problem here, but I didn't really understand how they would apply to my case. Thank you for the help!
A:
You have this line in your code:
mov ss, ax
The SS register is linked to SP. If SS changes, then the same SP will 99.9% of the time not make sense in the new segment, so it needs to change too!
Add a line such as:
mov sp, 0x7c00
Since your bootloader is located in the region 0x7C00-0x7DFF, placing the stack at 0x7C00 (growing downwards) is the most common choice (but feel free to choose anything else that doesn't conflict already existing data such as the interrupt table, the BDA or your own code).
Setting sp should be done in the next instruction after the mov ss, because the CPU disables interrupts briefly after so that you have time to set sp without interrupts corrupting the stack (this behavior is implemented on all x86 processors, save for a few old steppings of the ancient 8088, for which you have to enclose the stack change in cli-sti).
If you already know that the machine you are working with has a 32-bit CPU (meaning you have access to EAX, EBX etc), you can also use the lss sp, [mem] instruction, which loads the ss:sp pair from the memory location in a single instruction. However, it is best to avoid this in bootloader code (because the machine's hardware is unknown at this stage in the boot process).
|
I am trying to read my kernel from disk using bios 13h interupt but for unknown reasons it fails even though the kernel gets loaded in ram
|
I am trying to make a bootloader, and when I try to read my kernel from disk it shows
Reading from disk failed!
I have checked all the registers passed to the function and they appear to be correct, as suggested on Wikipedia.
So I checked the memory address where the kernel is supposed to be loaded and found that it has been loaded correctly. Then I removed the error handling and tried to jump to the kernel location, but it didn't execute.
So I tried running it on Bochs, and it does jump to the kernel location and executes that code, but I don't seem to see any results (I run it on QEMU and debug on Bochs).
Main file
[bits 16]
[org 0x7c00]
%define ENDL 0x0D, 0x0A, 0
start: jmp boot
%include "bootloader/out.asm"
%include "bootloader/disk.asm"
halt:
hlt
jmp halt
boot:
xor ax, ax ; sets ax to 0
; sets segments to 0
mov ds, ax
mov es, ax
mov ss, ax
mov si, wlcm_msg
call print
mov dh, 1
mov bx, kernel
call read_kernel
jmp kernel
jmp halt
wlcm_msg: db "Booted to 16 bit real mode", ENDL
times 510-($-$$) db 0
dw 0xaa55
kernel: db 0
Disk routines file
;
; @params:
; dx(dl) - number of sectors to load
; bx - loaction in ram to store the read data
;
read_kernel:
pusha
push dx
mov ah, 02h
mov al, dh ; sectors to read
mov cl, 02h ; the sector to read 1 is our bootloader
mov ch, 0
mov dl, 0
mov dh, 0
int 13h
jc read_err ; if carry flag is set then there is an error
pop dx
cmp al, dh ; sector read count
jne sector_err
popa
ret
read_err:
mov si, read_err_msg
call print
mov dh, ah
jmp halt
sector_err:
mov si, sector_err_msg
call print
jmp halt
read_err_msg: db "Reading from disk failed!", ENDL
sector_err_msg: db "Incorrect number of sectors to read!", ENDL
Kernel
[bits 16]
%define ENDL 0x0D, 0x0A, 0
start: jmp main
main:
mov ah, 9
xor bh, bh
mov cx, 1
mov al, 'E'
int 10h
cli
hlt
Build script
ASM = nasm
BOOT_SRC = bootloader/main.asm
KENREL_SRC = kernel/main.asm
BUILD_DIR = build
.PHONY: all clean create run debug
all: $(BUILD_DIR)/disk.img
$(BUILD_DIR)/disk.img: $(BUILD_DIR)/bootloader.bin $(BUILD_DIR)/kernel.bin
dd if=/dev/zero of=$@ bs=512 count=2880
dd if=$< of=$@ bs=512 conv=notrunc seek=0
dd if=$(word 2,$^) of=$@ bs=512 conv=notrunc seek=1
$(BUILD_DIR)/bootloader.bin: $(BOOT_SRC) create
$(ASM) -f bin $< -o $@
$(BUILD_DIR)/kernel.bin: $(KENREL_SRC) create
$(ASM) -f bin $< -o $@
create:
mkdir -p $(BUILD_DIR)
clean:
rm -rf $(BUILD_DIR)/*
run: $(BUILD_DIR)/disk.img
qemu-system-i386 -machine q35 -drive file=$<,format=raw
run_floppy: $(BUILD_DIR)/disk.img
qemu-system-i386 -machine q35 -fda $<
debug: bochs_config $(BUILD_DIR)/disk.img
bochs -f $<
I tried a lot of ways to fix this but none seem to work, so I am pretty much convinced that this is an issue with my build script or the way I set up segments. I've found many questions with the same problem here, but I didn't really understand how they would apply to my case. Thank you for the help!
|
[
"You have this line in your code:\nmov ss, ax\n\nThe SS register is linked to SP. If SS changes, then the same SP will 99.9% of the time not make sense in the new segment, so it needs to change too!\nAdd a line such as:\nmov sp, 0x7c00\n\nSince your bootloader is located in the region 0x7C00-0x7DFF, placing the stack at 0x7C00 (growing downwards) is the most common choice (but feel free to choose anything else that doesn't conflict already existing data such as the interrupt table, the BDA or your own code).\nSetting sp should be done in the next instruction after the mov ss, because the CPU disables interrupts briefly after so that you have time to set sp without interrupts corrupting the stack (this behavior is implemented on all x86 processors, save for a few old steppings of the ancient 8088, for which you have to enclose the stack change in cli-sti).\nIf you already know that the machine you are working with has a 32-bit CPU (meaning you have access to EAX, EBX etc), you can also use the lss sp, [mem] instruction, which loads the ss:sp pair from the memory location in a single instruction. However, it is best to avoid this in bootloader code (because the machine's hardware is unknown at this stage in the boot process).\n"
] |
[
2
] |
[] |
[] |
[
"bios",
"bootloader",
"disk",
"x86"
] |
stackoverflow_0074656354_bios_bootloader_disk_x86.txt
|
Q:
Kotlin Retrofit. Getting information from nested JSON from API
I'm trying to access the information stored in the "recipe" part of this JSON using Kotlin and Retrofit, but can't figure out how to do that.
`
{
"from": 0,
"to": 0,
"count": 0,
"_links": {
"self": {
"href": "string",
"title": "string"
},
"next": {
"href": "string",
"title": "string"
}
},
"hits": [
{
"recipe": {
"uri": "string",
"label": "string",
"image": "string",
"images": {
"THUMBNAIL": {
"url": "string",
"width": 0,
"height": 0
},
"SMALL": {
"url": "string",
"width": 0,
"height": 0
},
"REGULAR": {
"url": "string",
"width": 0,
"height": 0
},
"LARGE": {
"url": "string",
"width": 0,
"height": 0
}
},
"source": "string",
"url": "string",
"shareAs": "string",
"yield": 0,
"dietLabels": [
"string"
],
"healthLabels": [
"string"
],
"cautions": [
"string"
],
"ingredientLines": [
"string"
],
"ingredients": [
{
"text": "string",
"quantity": 0,
"measure": "string",
"food": "string",
"weight": 0,
"foodId": "string"
}
],
"calories": 0,
"glycemicIndex": 0,
"totalCO2Emissions": 0,
"co2EmissionsClass": "A+",
"totalWeight": 0,
"cuisineType": [
"string"
],
"mealType": [
"string"
],
"dishType": [
"string"
],
"instructions": [
"string"
],
"tags": [
"string"
],
"externalId": "string",
"totalNutrients": {},
"totalDaily": {},
"digest": [
{
"label": "string",
"tag": "string",
"schemaOrgTag": "string",
"total": 0,
"hasRDI": true,
"daily": 0,
"unit": "string",
"sub": {}
}
]
},
"_links": {
"self": {
"href": "string",
"title": "string"
},
"next": {
"href": "string",
"title": "string"
}
}
}
]
}
`
I've tried to follow a tutorial on YouTube, but the API used there is much simpler than this one, so I'm confused. My MainActivity:
`
import androidx.appcompat.app.AppCompatActivity
import android.os.Bundle
import android.util.Log
import androidx.recyclerview.widget.RecyclerView
import androidx.recyclerview.widget.StaggeredGridLayoutManager
import retrofit2.Call
import retrofit2.Callback
import retrofit2.Response
class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
val client = ApiClient.apiService.fetchRecipes("MyKey", "MyAppID", "Pizza")
client.enqueue(object: Callback<RecipeResponse>{
override fun onResponse(
call: Call<RecipeResponse>,
response: Response<RecipeResponse>
) {
if(response.isSuccessful){
Log.d("characters", ""+response.body())
val result = response.body()?.result
result?.let {
val adapter = MainAdapter(result)
val recyclerView = findViewById<RecyclerView>(R.id.recipesRv)
recyclerView?.layoutManager = StaggeredGridLayoutManager(2, StaggeredGridLayoutManager.VERTICAL)
recyclerView?.adapter = adapter
}
}
}
override fun onFailure(call: Call<RecipeResponse>, t: Throwable){
Log.e("failed", ""+t.message)
}
})
}
}
`
My Recipe.kt file:
`
import com.squareup.moshi.Json
data class Recipe (
@Json(name = "label")
val label: String,
@Json(name="image")
val image: String,
@Json(name = "url")
val url: String,
@Json(name="mealType")
val mealType: String
)
data class RecipeResponse(@Json(name="recipe")
val result : List<Recipe>)
My ApiClient.kt:
import com.squareup.moshi.Moshi
import com.squareup.moshi.kotlin.reflect.KotlinJsonAdapterFactory
import retrofit2.Call
import retrofit2.Retrofit
import retrofit2.converter.moshi.MoshiConverterFactory
import retrofit2.http.GET
import retrofit2.http.Query
object ApiClient {
private val BASE_URL = "https://api.edamam.com/v2/"
private val moshi = Moshi.Builder().add(KotlinJsonAdapterFactory()).build()
private val retrofit: Retrofit by lazy{
Retrofit.Builder()
.baseUrl(BASE_URL)
.addConverterFactory(MoshiConverterFactory.create(moshi))
.build()
}
val apiService: ApiService by lazy{
retrofit.create(ApiService::class.java)
}
}
interface ApiService{
@GET("recipes")
fun fetchRecipes(
@Query("app_key") key: String,
@Query("app_id") id: String,
@Query("q")q:String): Call<RecipeResponse>
}
`
and lastly my MainAdapter (for using recyclerview):
`
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.widget.ImageView
import android.widget.TextView
import androidx.recyclerview.widget.RecyclerView
import coil.load
import coil.transform.CircleCropTransformation
class MainAdapter(val recipeList: List<Recipe>):
RecyclerView.Adapter<MainAdapter.MainViewHolder>() {
inner class MainViewHolder(private val itemView: View): RecyclerView.ViewHolder(itemView){
fun bindData(recipe: Recipe){
val label = itemView.findViewById<TextView>(R.id.label)
//val image = itemView.findViewById<ImageView>(R.id.image)
label.text = recipe.label
}
}
override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): MainViewHolder {
return MainViewHolder(LayoutInflater.from(parent.context).inflate(R.layout.rv_item, parent, false))
}
override fun onBindViewHolder(holder: MainViewHolder, position: Int) {
holder.bindData(recipeList[position])
}
override fun getItemCount(): Int {
return recipeList.size
}
}
`
I'm a beginner in Kotlin and object oriented programming, so this is a bit overwhelming. Please excuse the long question.
A:
Looking at the JSON schema, "recipe" is a property of the elements in the "hits" array.
{
...
"hits": [
{
"recipe": {
...
}
So, to access "recipe", you first need to create the data classes for the response body.
data class Response (
...
val hits: List<Hit>,
...
)
data class Hit (
val recipe: Recipe,
...
)
|
Kotlin Retrofit. Getting information from nested JSON from API
|
I'm trying to access the information stored in the "recipe" part of this JSON using Kotlin and Retrofit, but can't figure out how to do that.
`
{
"from": 0,
"to": 0,
"count": 0,
"_links": {
"self": {
"href": "string",
"title": "string"
},
"next": {
"href": "string",
"title": "string"
}
},
"hits": [
{
"recipe": {
"uri": "string",
"label": "string",
"image": "string",
"images": {
"THUMBNAIL": {
"url": "string",
"width": 0,
"height": 0
},
"SMALL": {
"url": "string",
"width": 0,
"height": 0
},
"REGULAR": {
"url": "string",
"width": 0,
"height": 0
},
"LARGE": {
"url": "string",
"width": 0,
"height": 0
}
},
"source": "string",
"url": "string",
"shareAs": "string",
"yield": 0,
"dietLabels": [
"string"
],
"healthLabels": [
"string"
],
"cautions": [
"string"
],
"ingredientLines": [
"string"
],
"ingredients": [
{
"text": "string",
"quantity": 0,
"measure": "string",
"food": "string",
"weight": 0,
"foodId": "string"
}
],
"calories": 0,
"glycemicIndex": 0,
"totalCO2Emissions": 0,
"co2EmissionsClass": "A+",
"totalWeight": 0,
"cuisineType": [
"string"
],
"mealType": [
"string"
],
"dishType": [
"string"
],
"instructions": [
"string"
],
"tags": [
"string"
],
"externalId": "string",
"totalNutrients": {},
"totalDaily": {},
"digest": [
{
"label": "string",
"tag": "string",
"schemaOrgTag": "string",
"total": 0,
"hasRDI": true,
"daily": 0,
"unit": "string",
"sub": {}
}
]
},
"_links": {
"self": {
"href": "string",
"title": "string"
},
"next": {
"href": "string",
"title": "string"
}
}
}
]
}
`
I've tried to follow a tutorial on YouTube, but the API used there is much simpler than this one, so I'm confused. My MainActivity:
`
import androidx.appcompat.app.AppCompatActivity
import android.os.Bundle
import android.util.Log
import androidx.recyclerview.widget.RecyclerView
import androidx.recyclerview.widget.StaggeredGridLayoutManager
import retrofit2.Call
import retrofit2.Callback
import retrofit2.Response
class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
val client = ApiClient.apiService.fetchRecipes("MyKey", "MyAppID", "Pizza")
client.enqueue(object: Callback<RecipeResponse>{
override fun onResponse(
call: Call<RecipeResponse>,
response: Response<RecipeResponse>
) {
if(response.isSuccessful){
Log.d("characters", ""+response.body())
val result = response.body()?.result
result?.let {
val adapter = MainAdapter(result)
val recyclerView = findViewById<RecyclerView>(R.id.recipesRv)
recyclerView?.layoutManager = StaggeredGridLayoutManager(2, StaggeredGridLayoutManager.VERTICAL)
recyclerView?.adapter = adapter
}
}
}
override fun onFailure(call: Call<RecipeResponse>, t: Throwable){
Log.e("failed", ""+t.message)
}
})
}
}
`
My Recipe.kt file:
`
import com.squareup.moshi.Json
data class Recipe (
@Json(name = "label")
val label: String,
@Json(name="image")
val image: String,
@Json(name = "url")
val url: String,
@Json(name="mealType")
val mealType: String
)
data class RecipeResponse(@Json(name="recipe")
val result : List<Recipe>)
My ApiClient.kt:
import com.squareup.moshi.Moshi
import com.squareup.moshi.kotlin.reflect.KotlinJsonAdapterFactory
import retrofit2.Call
import retrofit2.Retrofit
import retrofit2.converter.moshi.MoshiConverterFactory
import retrofit2.http.GET
import retrofit2.http.Query
object ApiClient {
private val BASE_URL = "https://api.edamam.com/v2/"
private val moshi = Moshi.Builder().add(KotlinJsonAdapterFactory()).build()
private val retrofit: Retrofit by lazy{
Retrofit.Builder()
.baseUrl(BASE_URL)
.addConverterFactory(MoshiConverterFactory.create(moshi))
.build()
}
val apiService: ApiService by lazy{
retrofit.create(ApiService::class.java)
}
}
interface ApiService{
@GET("recipes")
fun fetchRecipes(
@Query("app_key") key: String,
@Query("app_id") id: String,
@Query("q")q:String): Call<RecipeResponse>
}
`
and lastly my MainAdapter (for using recyclerview):
`
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.widget.ImageView
import android.widget.TextView
import androidx.recyclerview.widget.RecyclerView
import coil.load
import coil.transform.CircleCropTransformation
class MainAdapter(val recipeList: List<Recipe>):
RecyclerView.Adapter<MainAdapter.MainViewHolder>() {
inner class MainViewHolder(private val itemView: View): RecyclerView.ViewHolder(itemView){
fun bindData(recipe: Recipe){
val label = itemView.findViewById<TextView>(R.id.label)
//val image = itemView.findViewById<ImageView>(R.id.image)
label.text = recipe.label
}
}
override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): MainViewHolder {
return MainViewHolder(LayoutInflater.from(parent.context).inflate(R.layout.rv_item, parent, false))
}
override fun onBindViewHolder(holder: MainViewHolder, position: Int) {
holder.bindData(recipeList[position])
}
override fun getItemCount(): Int {
return recipeList.size
}
}
`
I'm a beginner in Kotlin and object oriented programming, so this is a bit overwhelming. Please excuse the long question.
|
[
"By looking at the JSON schema \"recipe\" is the property of the elements in the \"hits\" Array.\n{\n ...\n \"hits\": [\n {\n \"recipe\": {\n ...\n}\n\nSo, to access the \"recipe\" first you need to create the data class for the response body.\ndata class Response (\n ...\n val hits: List<Hit>,\n ...\n)\n\ndata class Hit (\n val recipe: Recipe,\n ...\n)\n\n"
] |
[
0
] |
[] |
[] |
[
"android_recyclerview",
"json",
"kotlin",
"retrofit"
] |
stackoverflow_0074658212_android_recyclerview_json_kotlin_retrofit.txt
|
Q:
Prevent useEffect from triggering under certain conditions
I got 2 Lists with the same content. The difference being: one is a normal List, the other is a detailed List with each List Entry having a nested List corresponding to them.
For Example:
Example 1:
normal List detailed List
House 1 House 1
House 2 Resident 1
Resident 2
House 2
Resident 3
Now I want to highlight the List Entry you click on in both Lists (and if you click on a nested List's entry it should highlight the corresponding parent entry in the normalList, e.g. House 1 if you selected Resident 1 in the example above).
I implemented it like this:
function detailedList(props){
const context = useContext(ApplicationContext);
const [selectedItem,setSelectedItem]=useState({id:null , level:"top",parent:null)
const handleClick=(id,type,parent)=>{
//changes selected item if item in detailedList is clicked.
//if selected item is already a parent element then id equals parent
setSelectedItem({id:id,type:type,parent:parent});
context.setSelectedId(parent); //Problem Line
}
React.useEffect(() => {
//changes selected item if item in normalList is clicked.
setSelectedItem({
id: context.selectedId,
level: "top",
parent: context.selectedId,
});
}, [context.selectedId]);
return (
...
//Renders the List
)
}
//normalList just uses the context.selectedId to determine selected Item
So if, for example, you selected Resident 2 (see Example 1), selectedItem would be {id:2,level="nested",parent:1} and context.selectedId would be 1.
The problem now is: if you select a nested item in the detailed list whose parent wasn't already selected, it selects the parent instead. That's because the handleClick function sets context.selectedId and therefore triggers the useEffect.
So I would need a way to only trigger the useEffect if the change has its origin outside the component, or something along those lines.
Edit:
I made a fiddle to illustrate:
https://jsfiddle.net/hvf0s9t2/8/
Try selecting House 1 first and Person 5 second :/
A:
I solved it in a somewhat ugly way, so I would be glad about any advice on doing it more cleanly.
This is what i did:
function detailedList(props){
const [isUpdating, setUpdating] = useState(false); //new state
const context = useContext(ApplicationContext);
const [selectedItem,setSelectedItem]=useState({id:null , level:"top",parent:null)
const handleClick= async (id,type,parent)=>{
await setUpdating(true)
await setSelectedItem({id:id,type:type,parent:parent});
await context.setSelectedId(parent);
await setUpdating(false)
}
React.useEffect(() => {
if(!isUpdating){// Added Check. Only executes if origin was not this component.
setSelectedItem({
id: context.selectedId,
level: "top",
parent: context.selectedId,
});
}
}, [context.selectedId]);
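For reference, a common alternative to the extra isUpdating state is a ref flag. A rough sketch, keeping the names from the question (ApplicationContext and the render body are assumed from there):
import { useContext, useEffect, useRef, useState } from "react";

function detailedList(props) {
  const context = useContext(ApplicationContext);
  const [selectedItem, setSelectedItem] = useState({ id: null, level: "top", parent: null });
  const skipNextSync = useRef(false);

  const handleClick = (id, type, parent) => {
    skipNextSync.current = true;            // mark the change as originating here
    setSelectedItem({ id, type, parent });
    context.setSelectedId(parent);
  };

  useEffect(() => {
    if (skipNextSync.current) {             // ignore the echo of our own update
      skipNextSync.current = false;
      return;
    }
    setSelectedItem({ id: context.selectedId, level: "top", parent: context.selectedId });
  }, [context.selectedId]);

  // ...render the list as before
}
One caveat: if context.setSelectedId is called with a value that does not actually change, the effect never fires and the flag stays set, so this is only a sketch of the idea rather than a drop-in replacement.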
|
Prevent useEffect from triggering under certain conditions
|
I got 2 Lists with the same content. The difference being: one is a normal List, the other is a detailed List with each List Entry having a nested List corresponding to them.
For Example:
Example 1:
normal List detailed List
House 1 House 1
House 2 Resident 1
Resident 2
House 2
Resident 3
Now I want to highlight the List Entry you click on in both Lists (and if you click on a nested List's entry it should highlight the corresponding parent entry in the normalList, e.g. House 1 if you selected Resident 1 in the example above).
I implemented it like this:
function detailedList(props){
const context = useContext(ApplicationContext);
const [selectedItem,setSelectedItem]=useState({id:null , level:"top",parent:null)
const handleClick=(id,type,parent)=>{
//changes selected item if item in detailedList is clicked.
//if selected item is already a parent element then id equals parent
setSelectedItem({id:id,type:type,parent:parent});
context.setSelectedId(parent); //Problem Line
}
React.useEffect(() => {
//changes selected item if item in normalList is clicked.
setSelectedItem({
id: context.selectedId,
level: "top",
parent: context.selectedId,
});
}, [context.selectedId]);
return (
...
//Renders the List
)
}
//normalList just uses the context.selectedId to determine selected Item
So if, for example, you selected Resident 2 (see Example 1), selectedItem would be {id:2,level="nested",parent:1} and context.selectedId would be 1.
The problem now is: if you select a nested item in the detailed list whose parent wasn't already selected, it selects the parent instead. That's because the handleClick function sets context.selectedId and therefore triggers the useEffect.
So I would need a way to only trigger the useEffect if the change has its origin outside the component, or something along those lines.
Edit:
I made a fiddle to illustrate:
https://jsfiddle.net/hvf0s9t2/8/
Try selecting House 1 first and Person 5 second :/
|
[
"I solved it kinda ugly, so i would be glad about any advice to do it cleaner.\nThis is what i did:\nfunction detailedList(props){ \n const [isUpdating, setUpdating] = useState(false); //new state\n const context = useContext(ApplicationContext);\n const [selectedItem,setSelectedItem]=useState({id:null , level:\"top\",parent:null)\n\n const handleClick= async (id,type,parent)=>{ \n await setUpdating(true)\n await setSelectedItem({id:id,type:type,parent:parent});\n await context.setSelectedId(parent); \n await setUpdating(false)\n\n }\n\n React.useEffect(() => {\n if(!isUpdating){// Added Check. Only executes if origin was not this component.\n setSelectedItem({\n id: context.selectedId,\n level: \"top\",\n parent: context.selectedId,\n });\n }\n }, [context.selectedId]); \n\n\n\n"
] |
[
0
] |
[] |
[] |
[
"reactjs"
] |
stackoverflow_0074657448_reactjs.txt
|
Q:
How do I make parcel inline sourcemaps?
By default, Parcel puts the source map in a separate file. How do I tell Parcel to put the source map in the same output file? I guess that is called inline source maps?
A:
See this section of the documentation: https://parceljs.org/features/targets/#sourcemap.
In your package.json, add the following:
{
"targets": {
"default": {
"sourceMap": {
"inline": true
}
}
}
}
Note that you may need to change "default" to match your own target name, if you've configured other options.
|
How do I make parcel inline sourcemaps?
|
By default, Parcel puts the source map in a separate file. How do I tell Parcel to put the source map in the same output file? I guess that is called inline source maps?
|
[
"See this section of the documentation: https://parceljs.org/features/targets/#sourcemap.\nIn your package.json, add the following:\n{\n \"targets\": {\n \"default\": {\n \"sourceMap\": {\n \"inline\": true\n }\n }\n }\n}\n\nNote that you may need to change \"default\" to match your own target name, if you've configured other options.\n"
] |
[
2
] |
[] |
[] |
[
"parceljs"
] |
stackoverflow_0074651747_parceljs.txt
|
Q:
How to make np.roll work faster for a one-dimensional array?
I generate two zero arrays with np.zeros, then I use np.roll to circularly shift them. But when I call np.roll in a loop it is very slow. Is there any way to speed up my code?
Here is the code:
preamble_length = 256
threshold_level = 100
sample_rate = 750e3
decimation_factor = 6
preamble_combination = [1,-1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, 1, 1, 1, -1, -1, -1, -1, 1, -1, -1, 1, -1, -1,-1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, -1, 1, -1, -1, -1, 1, 1, 1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, 1, 1, 1, -1, -1, -1, -1, 1, -1, -1, 1, -1, 1, -1, -1, 1, -1, -1, -1, 1, 1, 1, -1, 1, -1, -1, 1, -1, -1, -1, 1, -1, 1, 1, 1, -1, -1, -1, -1, 1, -1, -1, 1, -1, -1, -1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, 1, 1, 1, -1, -1, -1, -1, 1, -1, -1, 1, -1, -1, -1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, -1, 1, -1, -1, -1, 1, 1, 1, 1, -1, 1, 1, -1, 1, 1, 1, -1, 1, -1, -1, -1, 1, 1, 1, -1, 1, -1, -1, 1, -1, -1, -1, -1, 1, -1, -1, -1, 1, 1, 1, 1, -1, 1, 1, -1, 1, 1,1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, -1, 1, -1, -1, -1, 1, 1, 1, 1, -1, 1, 1, -1, 1, 1,1]
sequence = np.zeros(preamble_length)
buffer_filter = np.zeros(preamble_length)
size_array = sample_rate / decimation_factor
rxDataReal = np.real(downsample(rxData, decimation_factor)) #rxData is a array of complex numbers
rxDataDownSampled = rxDataReal
check = 0
find_max = 0
peak_max = 0
preamble_ready = 0
received_flag = False
size_array = int(size_array)
main_counter = 0
#In this section the np.roll working very slow
for main_counter in range(size_array):
if(preamble_ready == 0):
if(rxDataDownSampled[main_counter] < 0):
check_sign = 1
else:
check_sign = -1
sequence = np.roll(sequence, -1)#this
buffer_filter = np.roll(buffer_filter, -1)#and this
sequence[preamble_length-1] = check_sign
bufferSum = sequence * preamble_combination
buffer_filter[preamble_length-1] = np.sum(bufferSum)
find_max = np.max(buffer_filter)
if(find_max >= threshold_level):
peak_max = find_max
sequence = np.zeros(preamble_ready)
buffer_filter = np.zeros(preamble_length)
print('Value of peak_max: ', peak_max)
received_flag = True
if(received_flag==True):
break
preamble_value = peak_max
A:
So your roll calls are doing:
In [118]: x=np.arange(10)
In [119]: np.roll(x,-1)
Out[119]: array([1, 2, 3, 4, 5, 6, 7, 8, 9, 0])
You can look at the np.roll code; it's more general, and it has to, in one way or another, copy all the values of x to a new array. This might be a bit faster, since it doesn't try to be as general:
In [120]: y=np.zeros_like(x)
...: y[:-1] = x[1:]; y[-1] = x[0]
In [121]: y
Out[121]: array([1, 2, 3, 4, 5, 6, 7, 8, 9, 0])
times
Nope, it isn't faster:
In [122]: x=np.arange(100000)
In [123]: timeit np.roll(x,-1)
82.1 µs ± 102 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [124]: %%timeit
...: y=np.zeros_like(x)
...: y[:-1] = x[1:]; y[-1] = x[0]
...:
...:
93.4 µs ± 4.84 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
other timings:
In [128]: timeit y=x[1:].copy()
52.4 µs ± 164 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [129]: timeit np.concatenate((x[1:],x[0:1]))
58.6 µs ± 289 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
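Stepping back from roll itself: each buffer_filter entry in the question is just the dot product of the last 256 sign values with the preamble, so the whole Python loop can usually be replaced by a single cross-correlation. A rough sketch, reusing the variable names from the question (rxDataDownSampled, preamble_combination, size_array and threshold_level are assumed to exist):
import numpy as np

preamble = np.asarray(preamble_combination)
# same sign rule as in the loop: negative samples -> 1, otherwise -1
signs = np.where(rxDataDownSampled[:size_array] < 0, 1, -1)

# sliding dot product of the sign sequence with the preamble, no Python loop
corr = np.correlate(signs, preamble, mode='valid')

hits = np.flatnonzero(corr >= threshold_level)
if hits.size:
    peak_max = corr[hits[0]]   # first position where the threshold is crossed
This is not an exact drop-in (the original keeps a rolling window of the last 256 filter outputs and takes its max), but it computes the same correlation values and avoids calling np.roll twice per sample.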
|
How to make np.roll work faster for a one-dimensional array?
|
I generate two zero arrays with np.zeros, then I use np.roll to circularly shift them. But when I call np.roll in a loop it is very slow. Is there any way to speed up my code?
Here is the code:
preamble_length = 256
threshold_level = 100
sample_rate = 750e3
decimation_factor = 6
preamble_combination = [1,-1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, 1, 1, 1, -1, -1, -1, -1, 1, -1, -1, 1, -1, -1,-1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, -1, 1, -1, -1, -1, 1, 1, 1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, 1, 1, 1, -1, -1, -1, -1, 1, -1, -1, 1, -1, 1, -1, -1, 1, -1, -1, -1, 1, 1, 1, -1, 1, -1, -1, 1, -1, -1, -1, 1, -1, 1, 1, 1, -1, -1, -1, -1, 1, -1, -1, 1, -1, -1, -1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, 1, 1, 1, -1, -1, -1, -1, 1, -1, -1, 1, -1, -1, -1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, -1, 1, -1, -1, -1, 1, 1, 1, 1, -1, 1, 1, -1, 1, 1, 1, -1, 1, -1, -1, -1, 1, 1, 1, -1, 1, -1, -1, 1, -1, -1, -1, -1, 1, -1, -1, -1, 1, 1, 1, 1, -1, 1, 1, -1, 1, 1,1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, -1, 1, -1, -1, -1, 1, 1, 1, 1, -1, 1, 1, -1, 1, 1,1]
sequence = np.zeros(preamble_length)
buffer_filter = np.zeros(preamble_length)
size_array = sample_rate / decimation_factor
rxDataReal = np.real(downsample(rxData, decimation_factor)) #rxData is a array of complex numbers
rxDataDownSampled = rxDataReal
check = 0
find_max = 0
peak_max = 0
preamble_ready = 0
received_flag = False
size_array = int(size_array)
main_counter = 0
#In this section the np.roll working very slow
for main_counter in range(size_array):
if(preamble_ready == 0):
if(rxDataDownSampled[main_counter] < 0):
check_sign = 1
else:
check_sign = -1
sequence = np.roll(sequence, -1)#this
buffer_filter = np.roll(buffer_filter, -1)#and this
sequence[preamble_length-1] = check_sign
bufferSum = sequence * preamble_combination
buffer_filter[preamble_length-1] = np.sum(bufferSum)
find_max = np.max(buffer_filter)
if(find_max >= threshold_level):
peak_max = find_max
sequence = np.zeros(preamble_ready)
buffer_filter = np.zeros(preamble_length)
print('Value of peak_max: ', peak_max)
received_flag = True
if(received_flag==True):
break
preamble_value = peak_max
|
[
"So your roll are doing:\nIn [118]: x=np.arange(10)\nIn [119]: np.roll(x,-1)\nOut[119]: array([1, 2, 3, 4, 5, 6, 7, 8, 9, 0])\n\nYou can look at the np.roll code; it's probably more general, it has to, in one way or other, copy all the values of x to a new array. This might be a bit faster, since it doesn't try to be as general:\nIn [120]: y=np.zeros_like(x)\n ...: y[:-1] = x[1:]; y[-1] = x[0]\nIn [121]: y\nOut[121]: array([1, 2, 3, 4, 5, 6, 7, 8, 9, 0])\n\ntimes\nNope, it isn't faster:\nIn [122]: x=np.arange(100000)\n\nIn [123]: timeit np.roll(x,-1)\n82.1 µs ± 102 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)\n\nIn [124]: %%timeit \n ...: y=np.zeros_like(x)\n ...: y[:-1] = x[1:]; y[-1] = x[0]\n ...: \n ...: \n93.4 µs ± 4.84 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)\n\nother timings:\nIn [128]: timeit y=x[1:].copy()\n52.4 µs ± 164 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)\n\nIn [129]: timeit np.concatenate((x[1:],x[0:1]))\n58.6 µs ± 289 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)\n\n"
] |
[
0
] |
[] |
[] |
[
"numpy",
"python",
"signal_processing"
] |
stackoverflow_0074655749_numpy_python_signal_processing.txt
|
Q:
include a secondary file in main go file
I have a main.go file that I worked on and now I'm trying to organize since it became a little lengthy. I want to create a new file, put some functions in it and then include it in main.go and use those functions. That new file will be in the same directory as main.go. Anybody have any idea how to do this?
A:
As long as the go files are in the same package, you do not need to import anything.
Example:
project/main.go:
package main
import "fmt"
func main() {
fmt.Println(sayHello())
}
project/utils.go:
package main
func sayHello() (string) {
return "hello!"
}
To run: go run main.go utils.go or go run *.go
A:
You don't have to do any including (importing). Just use the same package name in both files.
A:
Old question, but no matter...
You can create a go.mod file in the same directory as your source files with the following content (at the moment I use Go version 1.19):
module main
go 1.19
After that you can build your project as follows:
go build .
|
include a secondary file in main go file
|
I have a main.go file that I worked on and now I'm trying to organize since it became a little lengthy. I want to create a new file, put some functions in it and then include it in main.go and use those functions. That new file will be in the same directory as main.go. Anybody have any idea how to do this?
|
[
"As long as the go files are in the same package, you do not need to import anything.\nExample:\nproject/main.go:\npackage main\n\nimport \"fmt\"\n\nfunc main() {\n fmt.Println(sayHello())\n}\n\nproject/utils.go:\npackage main\n\nfunc sayHello() (string) {\n return \"hello!\"\n}\n\nTo run: go run main.go utils.go or go run *.go\n",
"You don't have to do any including (importing). Just use the same package name in both files.\n",
"Old question, but no matter...\nYou can create go.mod file at the same directory you keep your source files with the following content (ATM I use go version 1.19):\nmodule main\n\ngo 1.19\n\nAfter that you can build your project as follows:\ngo build .\n\n"
] |
[
22,
5,
0
] |
[] |
[] |
[
"file",
"go",
"include"
] |
stackoverflow_0036904668_file_go_include.txt
|
Q:
How does a debounce function return the function in JavaScript
I have this debounce function:
const selectElement = document.querySelector('input');
const debounce = (cb, time = 1000) => {
let timer;
return (...args) => {
console.log('run inner function')
if (timer) {
clearTimeout(timer)
}
timer = setTimeout(() => cb(...args), time)
}
}
const onChange = debounce((e) => {
console.log('event', e.target.value)
})
selectElement.addEventListener('input', onChange);
<input input={onChange}/>
The code works OK, but I want to understand how the returned function is triggered inside the debounce function. I know that if a function returns another function I need to call it like this: debounce()() to trigger the second one, but in our case we only call debounce() once when passing it to addEventListener, so how does the second call happen?
A:
Maybe it would help to first acknowledge that:
debounce()();
can be rewritten as
let onChange = debounce();
onChange();
And you've passed onChange to addEventListener here:
selectElement.addEventListener('input', onChange);
The internals of the event listener mechanism will remember the onChange you passed in and will call onChange() when the input event occurs.
Imagine an implementation of function addEventListener(type, listener) {} that arranges for listener() to happen when the event occurs.
This is done to defer the function call. The first part (debounce()) creates a function immediately (synchronously). But the call to the function that debounce() returned is deferred until the event occurs.
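A stripped-down way to see the same mechanics, assuming only the debounce function from the question: the listener machinery just stores the returned function and calls it later.
// stand-in for the browser: remember the listener, call it when an "event" happens
function fakeAddEventListener(type, listener) {
  setTimeout(() => listener({ target: { value: 'hello' } }), 500);
}

const onChange = debounce((e) => console.log('event', e.target.value)); // debounce() runs now
fakeAddEventListener('input', onChange);                                // onChange() runs later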
A:
You would only call it like debounce()() if you wanted to immediately call the function it returns (and throw away the entire point, since with that method you could only call it once).
It returns a function, which here you've named onChange. onChange is called when the selectElement has an input event. You don't write selectElement.addEventListener('input', onChange()) (emphasis on onChange()) because that would just call onChange once (immediately, not even on the first input event) and attempt to use a function returned from onChange as the listener, but it doesn't return one.
|
How does a debounce function return the function in JavaScript
|
I have this debounce function:
const selectElement = document.querySelector('input');
const debounce = (cb, time = 1000) => {
let timer;
return (...args) => {
console.log('run inner function')
if (timer) {
clearTimeout(timer)
}
timer = setTimeout(() => cb(...args), time)
}
}
const onChange = debounce((e) => {
console.log('event', e.target.value)
})
selectElement.addEventListener('input', onChange);
<input input={onChange}/>
The code works OK, but I want to understand how the returned function is triggered inside the debounce function. I know that if a function returns another function I need to call it like this: debounce()() to trigger the second one, but in our case we only call debounce() once when passing it to addEventListener, so how does the second call happen?
|
[
"Maybe it would help to first acknowledge that:\ndebounce()();\n\ncan be rewritten as\nlet onChange = debounce();\nonChange();\n\nAnd you've passed onChange to addEventListener here:\nselectElement.addEventListener('input', onChange);\n\nThe internals of the event listener mechanism will remember the onChange you passed in and will call onChange() when the input event occurs.\nImagine an implementation of function addEventListener(type, listener) {} that arranges for listener() to happen when the event occurs.\nThis is done to defer the function call. The first part (debounce()) creates a function immediately (synchronously). But the call to the function that debounce() returned is deferred until the event occurs.\n",
"You only would call it like debounce()() if you want to immediately call the function it returns (and throw away the entire point, since with that method you could only call it once).\nIt returns a function, which here you've named onChange. onChange is called when the selectElement has a input event. You don't say selectElement.addEventListener(('input', onChange()) (emphasis on onChange()) because that would just call onChange once (immediately, not even on the first input event) and attempt to run a function returned from onChange, but it doesn't\n"
] |
[
4,
3
] |
[] |
[] |
[
"javascript"
] |
stackoverflow_0074658534_javascript.txt
|
Q:
How can I have the specific type inferred based on the argument passed
I'd like to make it so that my getWidget function automatically returns the specific type based on the parameter I pass to the function. Is there a way to do this in TypeScript?
As of right now it gives me the union of all possible types as the return type, but not the specific one corresponding to the key provided.
example
export interface ExampleAccountCenterHTMLElement extends HTMLElement {
x: () => void;
}
export interface ExampleMiniFooterHTMLElement extends HTMLElement {
y: () => void;
}
enum ExampleHtmlElementsName {
ExampleMiniFooter = 'Example-mini-footer',
ExampleAccountCenter = 'Example-account-center',
}
interface ExampleHtmlElements {
[ExampleHtmlElementsName.ExampleAccountCenter]: ExampleAccountCenterHTMLElement,
[ExampleHtmlElementsName.ExampleMiniFooter]: ExampleMiniFooterHTMLElement,
}
export function getWidget(tagName: ExampleHtmlElementsName) {
return document.querySelector<ExampleHtmlElements[typeof tagName]>(tagName);
}
const res = getWidget(ExampleHtmlElementsName.ExampleAccountCenter)
A:
Using typeof on tagName is not a good idea. You gave tagName the explicit type ExampleHtmlElementsName, and that is exactly what typeof tagName will evaluate to. Instead, make the function generic.
export function getWidget<T extends ExampleHtmlElementsName>(tagName: T) {
return document.querySelector<ExampleHtmlElements[T]>(tagName);
}
Playground
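With the generic signature the return type follows the argument, so both calls below should infer the specific element type (the types in the comments are what the compiler is expected to report):
const accountCenter = getWidget(ExampleHtmlElementsName.ExampleAccountCenter);
// accountCenter: ExampleAccountCenterHTMLElement | null
accountCenter?.x();

const footer = getWidget(ExampleHtmlElementsName.ExampleMiniFooter);
// footer: ExampleMiniFooterHTMLElement | null
footer?.y();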
|
How can I have the specific type inferred based on the argument passed
|
I'd like to make it so that my getWidget function automatically returns the specific type based on the parameter I pass to the function. Is there a way to do this in TypeScript?
As of right now it gives me the union of all possible types as the return type, but not the specific one corresponding to the key provided.
example
export interface ExampleAccountCenterHTMLElement extends HTMLElement {
x: () => void;
}
export interface ExampleMiniFooterHTMLElement extends HTMLElement {
y: () => void;
}
enum ExampleHtmlElementsName {
ExampleMiniFooter = 'Example-mini-footer',
ExampleAccountCenter = 'Example-account-center',
}
interface ExampleHtmlElements {
[ExampleHtmlElementsName.ExampleAccountCenter]: ExampleAccountCenterHTMLElement,
[ExampleHtmlElementsName.ExampleMiniFooter]: ExampleMiniFooterHTMLElement,
}
export function getWidget(tagName: ExampleHtmlElementsName) {
return document.querySelector<ExampleHtmlElements[typeof tagName]>(tagName);
}
const res = getWidget(ExampleHtmlElementsName.ExampleAccountCenter)
|
[
"Using typeof on tagName is not a good idea. You gave tagName the explicit type ExampleHtmlElementsName, and that is excactly what typeof tagName will evaluate to. Instead, make the function generic.\nexport function getWidget<T extends ExampleHtmlElementsName>(tagName: T) {\n return document.querySelector<ExampleHtmlElements[T]>(tagName);\n}\n\n\nPlayground\n"
] |
[
2
] |
[] |
[] |
[
"typescript"
] |
stackoverflow_0074657754_typescript.txt
|
Q:
Sort an Array of Boolean primitive values in Java
How can I sort this array of boolean primitive, by putting false first and true at the end?
Suppose I have the following:
boolean[] arrayOfBoolean = // initializing the array
A:
You can create your own simplified version of the Counting sort algorithm by iterating over the given boolean array and accumulating the number of false values.
Then construct a new array based on the obtained count (or reassign the values in the existing one, depending on your needs).
The time complexity is O(n) - only two iterations over the given dataset (in the best case, when the given array contains only false values, only one iteration is required). Using the built-in Timsort with Boolean wrappers would be slower. The space complexity is O(n) if a new array is generated (or O(1) if the values are reassigned in place).
public static boolean[] sortBooleans(boolean[] booleans) {
int falseCount = 0;
for (boolean next: booleans) {
if (!next) falseCount++;
}
boolean[] result = new boolean[booleans.length];
for (int i = falseCount; i < result.length; i++) {
result[i] = true;
}
return result;
}
main()
public static void main(String[] args) {
boolean[] booleans = new boolean[]{true, false, true};
System.out.println(Arrays.toString(sortBooleans(booleans)));
}
Output:
[false, true, true]
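If the input array may be modified in place, the same counting idea needs no second array at all; a small sketch:
public static void sortBooleansInPlace(boolean[] booleans) {
    int falseCount = 0;
    for (boolean b : booleans) {
        if (!b) falseCount++;
    }
    // the first falseCount slots become false, the rest true
    java.util.Arrays.fill(booleans, 0, falseCount, false);
    java.util.Arrays.fill(booleans, falseCount, booleans.length, true);
}
This keeps the O(n) time while dropping the extra O(n) space.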
A:
There is no out-of-the-box functionality for this in the JDK. If you need to order the elements in the original array, you have to implement it yourself.
If sorting a Boolean[] array with the same content is acceptable, you can copy the values into a wrapper array and sort that.
public class Temp {
public static void main(String[] args) {
boolean[] initial = new boolean[]{true, true, false, true};
Boolean[] result = new Boolean[initial.length];
for (int i = 0; i < initial.length; i++) {
result[i] = initial[i];
}
Arrays.sort(result);
System.out.println(Arrays.toString(result));
}
}
|
Sort an Array of Boolean primitive values in Java
|
How can I sort this array of boolean primitive, by putting false first and true at the end?
Suppose I have the following:
boolean[] arrayOfBoolean = // initializing the array
|
[
"You can create your own simplified version of the Counting sort algorithm by iterating over the given boolean array and accumulating the number of false values.\nThen construct a new array based on the obtained count (or reassign the values in the existing one, depending on your needs).\nThe time complexity is O(n) - only two iterations over the given dataset (in the best case when the given array contains only false values - only one iteration required). Using built-in Timsort with Boolean wrappers would be slower. Space complexity O(n) (if generating a new array doesn't required O(1)).\npublic static boolean[] sortBooleans(boolean[] booleans) {\n \n int falseCount = 0;\n for (boolean next: booleans) {\n if (!next) falseCount++;\n }\n \n boolean[] result = new boolean[booleans.length];\n for (int i = falseCount; i < result.length; i++) {\n result[i] = true;\n }\n return result;\n}\n\nmain()\npublic static void main(String[] args) {\n boolean[] booleans = new boolean[]{true, false, true};\n System.out.println(Arrays.toString(sortBooleans(booleans)));\n}\n\nOutput:\n[false, true, true]\n\n",
"There is no out-of-the-box functionality for this in the jdk. If you need to order the elements in the original array, you have to implement it yourself.\nIf sorting Boolean array with the same content is acceptable, you can copy the array and sort the new array.\npublic class Temp {\n\n public static void main(String[] args) {\n boolean[] initial = new boolean[]{true, true, false, true};\n Boolean[] result = new Boolean[initial.length];\n for (int i = 0; i < initial.length; i++) {\n result[i] = initial[i];\n }\n Arrays.sort(result);\n System.out.println(Arrays.toString(result));\n }\n}\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"algorithm",
"arrays",
"boolean",
"java",
"sorting"
] |
stackoverflow_0074658458_algorithm_arrays_boolean_java_sorting.txt
|
Q:
What does it mean if the TNS_Names Directory is Empty?
What does it mean if the TNS_Names directory is empty in the Preferences GUI of SQL Developer?
A:
It just means that there is no tnsnames.ora file for SQL Developer to use, so you can't use TNS entries when defining connections.
It won't stop you defining other connection types - 'Basic', 'Custom JDBC' etc. - you just won't be able to use 'TNS' as there won't be any options to pick from in the 'Network alias' drop-down; and if you've previously picked one and then removed the tnsnames.ora then it won't be recognised any more.
|
What does it mean if the TNS_Names Directory is Empty?
|
What does it mean if the TNS_Names directory is empty in the Preferences GUI of SQL Developer?
|
[
"It just means that there is no tnsnames.ora file for SQL Developer to use, so you can't use TNS entries when defining connections.\nIt won't stop you defining other connection types - 'Basic', 'Custom JDBC' etc. - you just won't be able to use 'TNS' as there won't be any options to pick from in the 'Network alias' drop-down; and if you're previously picked one and then removed the tnsnames.ora then it won't be recognised any more.\n"
] |
[
1
] |
[] |
[] |
[
"oracle",
"oracle_sqldeveloper"
] |
stackoverflow_0074658662_oracle_oracle_sqldeveloper.txt
|
Q:
Druid - longSum metrics is not populating
I am doing batch ingestion in Druid, using the wikiticker-index.json file which comes with the Druid quickstart.
Following is my data schema in the wikiticker-index.json file.
{
type:"index_hadoop",
spec:{
ioConfig:{
type:"hadoop",
inputSpec:{
type:"static",
paths:"quickstart/wikiticker-2015-09-12-sampled.json"
}
},
dataSchema:{
dataSource:"wikiticker",
granularitySpec:{
type:"uniform",
segmentGranularity:"day",
queryGranularity:"none",
intervals:[
"2015-09-12/2015-09-13"
]
},
parser:{
type:"hadoopyString",
parseSpec:{
format:"json",
dimensionsSpec:{
dimensions:[
"channel",
"cityName",
"comment",
"countryIsoCode",
"countryName",
"isAnonymous",
"isMinor",
"isNew",
"isRobot",
"isUnpatrolled",
"metroCode",
"namespace",
"page",
"regionIsoCode",
"regionName",
"user"
]
},
timestampSpec:{
format:"auto",
column:"time"
}
}
},
metricsSpec:[
{
name:"count",
type:"count"
},
{
name:"added",
type:"longSum",
fieldName:"added"
},
{
name:"deleted",
type:"longSum",
fieldName:"deleted"
},
{
name:"delta",
type:"longSum",
fieldName:"delta"
},
{
name:"user_unique",
type:"hyperUnique",
fieldName:"user"
}
]
},
tuningConfig:{
type:"hadoop",
partitionsSpec:{
type:"hashed",
targetPartitionSize:5000000
},
jobProperties:{
}
}
}
}
After ingesting the sample JSON, only the following metrics show up.
I am unable to find the longSum metrics, i.e. added, deleted and delta.
Any particular reason?
Does anybody know about this?
A:
OP confirmed this comment from Slim Bougerra worked:
You need to add yourself on the Superset UI. Superset doesn't populate the metrics automatically.
|
Druid - longSum metrics is not populating
|
I am doing batch ingestion in Druid, using the wikiticker-index.json file which comes with the Druid quickstart.
Following is my data schema in the wikiticker-index.json file.
{
type:"index_hadoop",
spec:{
ioConfig:{
type:"hadoop",
inputSpec:{
type:"static",
paths:"quickstart/wikiticker-2015-09-12-sampled.json"
}
},
dataSchema:{
dataSource:"wikiticker",
granularitySpec:{
type:"uniform",
segmentGranularity:"day",
queryGranularity:"none",
intervals:[
"2015-09-12/2015-09-13"
]
},
parser:{
type:"hadoopyString",
parseSpec:{
format:"json",
dimensionsSpec:{
dimensions:[
"channel",
"cityName",
"comment",
"countryIsoCode",
"countryName",
"isAnonymous",
"isMinor",
"isNew",
"isRobot",
"isUnpatrolled",
"metroCode",
"namespace",
"page",
"regionIsoCode",
"regionName",
"user"
]
},
timestampSpec:{
format:"auto",
column:"time"
}
}
},
metricsSpec:[
{
name:"count",
type:"count"
},
{
name:"added",
type:"longSum",
fieldName:"added"
},
{
name:"deleted",
type:"longSum",
fieldName:"deleted"
},
{
name:"delta",
type:"longSum",
fieldName:"delta"
},
{
name:"user_unique",
type:"hyperUnique",
fieldName:"user"
}
]
},
tuningConfig:{
type:"hadoop",
partitionsSpec:{
type:"hashed",
targetPartitionSize:5000000
},
jobProperties:{
}
}
}
}
After ingesting the sample JSON, only the following metrics show up.
I am unable to find the longSum metrics, i.e. added, deleted and delta.
Any particular reason?
Does anybody know about this?
|
[
"OP confirmed this comment from Slim Bougerra worked:\n\nYou need to add yourself on the Superset UI. Superset doesn't populate the metrics automatically.\n\n"
] |
[
0
] |
[] |
[] |
[
"apache_superset",
"druid",
"hadoop",
"java"
] |
stackoverflow_0045007683_apache_superset_druid_hadoop_java.txt
|
Q:
Microsoft Graph API Create Group
When creating a new group that is NOT security enabled, is it possible to create the group with mailEnabled set to false?
When creating the group, the API seems to ignore the mailEnabled field in the request and always returns true. Also, when attempting to patch the group with mailEnabled=false, the API returns 204 but mailEnabled never changes from true to false.
Is this a bug, or is it simply not possible to have a mailEnabled=false group even though there is an option flag?
A:
Although this may not be a direct answer, I would try creating the group as mail-enabled:false via the GUI. If it allows you to do this then you can confirm that the API is not working as expected.
Have a look here as well:
Mail-enable
From reading this, it seems like the mailEnabled attribute cannot receive a PATCH via the Graph API
|
Microsoft Graph API Create Group
|
When creating a new group that is NOT security enabled, is it possible to create the group with mailEnabled set to false?
When creating the group, the API seems to ignore the mailEnabled field in the request and always returns true. Also, when attempting to patch the group with mailEnabled=false, the API returns 204 but mailEnabled never changes from true to false.
Is this a bug, or is it simply not possible to have a mailEnabled=false group even though there is an option flag?
|
[
"Although this may not be a direct answer, I would try creating the group as mail-enabled:false via the GUI. If it allows you to do this then you can confirm that the API is not working as expected.\nHave a look here as well:\nMail-enable\nFrom reading this, it seems like the mailEnabled attribute cannot receive a PATCH via the Graph API\n"
] |
[
0
] |
[] |
[] |
[
"msgraph"
] |
stackoverflow_0073673395_msgraph.txt
|
Q:
Warning message "Function components cannot be given refs" when using an own child coponent with Dropzone
When creating my own child function component and using it within <Dropzone>...</Dropzone>, I see the following warning message in the console:
react-dom.development.js:67 Warning: Function components cannot be given refs. Attempts to access this ref will fail. Did you mean to use React.forwardRef()?
Check the render method of `Dropzone`.
at MyContainer (http://localhost:3000/static/js/bundle.js:83:12)
at http://localhost:3000/static/js/bundle.js:35421:23
at header
at div
at App
My code looks like that:
import React from 'react';
import logo from './logo.svg';
import './App.css';
import Dropzone from 'react-dropzone';
export const MyContainer = ({...other}) => (
<div {...other}/>
);
export function App() {
return (
<div className="App">
<header className="App-header">
<img src={logo} className="App-logo" alt="logo" />
<p data-testid="projectNameInput">
Hello World
</p>
<Dropzone onDrop={acceptedFiles => console.log(acceptedFiles)}>
{({getRootProps, getInputProps}) => (
<MyContainer {...getRootProps()}>
<section>
<div>
<input {...getInputProps()} />
<p>Drag 'n' drop some files here, or click to select files</p>
</div>
</section>
</MyContainer>
)}
</Dropzone>
</header>
</div>
);
}
export default App;
Everything seems to work as expected with the Dropzone, and the message disappears when I don't use {...getRootProps()} as the attribute list of MyContainer.
Of course I have checked the React documentation about forwardRef (https://reactjs.org/docs/forwarding-refs.html) and searched for an explanation of this issue. However, the explanations don't really fit my case, because I don't use any reference here at all. At least I don't see the usage of a reference.
Is this an issue with react-dropzone, or am I using the dropzone wrongly?
A:
I ran into the same warning a while back and found that when you call getRootProps it returns several different props, one of them is ref:
const { ref, ...rest } = getRootProps();
So you are actually using ref even though it is not obvious. If you want to use Dropzone on your custom component (MyContainer in your case), you have to add Ref forwarding on MyContainer:
const MyContainer = React.forwardRef(({...other}, ref) => (
  <div ref={ref} {...other}/>
));
OR
You can extract the ref value and pass it as a custom prop:
// MyContainer
export const MyContainer = ({ innerRef, ...other}) => ( // make component accept prop innerRef which will be passed to the div's ref prop
<div ref={innerRef} {...other}/>
);
// App (omitted code before and after)
<Dropzone onDrop={acceptedFiles => console.log(acceptedFiles)}>
{({ getRootProps, getInputProps }) => {
const { ref, ...rootProps } = getRootProps(); // extract ref
return (
<MyContainer innerRef={ref} {...rootProps}> // pass ref via custom prop innerRef
<section>
<div>
<input {...getInputProps()} />
<p>Drag 'n' drop some files here, or click to select files</p>
</div>
</section>
</MyContainer>
);
}}
</Dropzone>
Hope it helps!
|
Warning message "Function components cannot be given refs" when using an own child coponent with Dropzone
|
When creating my own child function component and using it within <Dropzone>...</Dropzone>, I see the following warning message in the console:
react-dom.development.js:67 Warning: Function components cannot be given refs. Attempts to access this ref will fail. Did you mean to use React.forwardRef()?
Check the render method of `Dropzone`.
at MyContainer (http://localhost:3000/static/js/bundle.js:83:12)
at http://localhost:3000/static/js/bundle.js:35421:23
at header
at div
at App
My code looks like that:
import React from 'react';
import logo from './logo.svg';
import './App.css';
import Dropzone from 'react-dropzone';
export const MyContainer = ({...other}) => (
<div {...other}/>
);
export function App() {
return (
<div className="App">
<header className="App-header">
<img src={logo} className="App-logo" alt="logo" />
<p data-testid="projectNameInput">
Hello World
</p>
<Dropzone onDrop={acceptedFiles => console.log(acceptedFiles)}>
{({getRootProps, getInputProps}) => (
<MyContainer {...getRootProps()}>
<section>
<div>
<input {...getInputProps()} />
<p>Drag 'n' drop some files here, or click to select files</p>
</div>
</section>
</MyContainer>
)}
</Dropzone>
</header>
</div>
);
}
export default App;
Everything seems to work as expected with the Dropzone, and the message disappears when I don't use {...getRootProps()} as the attribute list of MyContainer.
Of course I have checked the React documentation about forwardRef (https://reactjs.org/docs/forwarding-refs.html) and searched for an explanation of this issue. However, the explanations don't really fit my case, because I don't use any reference here at all. At least I don't see the usage of a reference.
Is this an issue with react-dropzone, or am I using the dropzone wrongly?
|
[
"I ran into the same warning a while back and found that when you call getRootProps it returns several different props, one of them is ref:\nconst { ref, ...rest } = getRootProps();\n\nSo you are actually using ref even though it is not obvious. If you want to use Dropzone on your custom component (MyContainer in your case), you have to add Ref forwarding on MyContainer:\nconst MyContainer = React.forwardRef(ref, {...other}) => (\n <div ref={ref} {...other}/>\n);\n\nOR\nYou can extract the ref value and pass it as a custom prop:\n// MyContainer\nexport const MyContainer = ({ innerRef, ...other}) => ( // make component accept prop innerRef which will be passed to the div's ref prop\n <div ref={innerRef} {...other}/>\n);\n\n\n// App (omitted code before and after)\n<Dropzone onDrop={acceptedFiles => console.log(acceptedFiles)}>\n {({ getRootProps, getInputProps }) => {\n const { ref, ...rootProps } = getRootProps(); // extract ref\n return (\n <MyContainer innerRef={ref} {...rootProps}> // pass ref via custom prop innerRef\n <section>\n <div>\n <input {...getInputProps()} />\n <p>Drag 'n' drop some files here, or click to select files</p>\n </div>\n </section>\n </MyContainer>\n );\n }}\n</Dropzone>\n\nHope it helps!\n"
] |
[
0
] |
[] |
[] |
[
"react_dropzone",
"reactjs"
] |
stackoverflow_0070943211_react_dropzone_reactjs.txt
|
Q:
How to forward declare an object type in class B if it is declared in class A by making use of the "using" keyword
Following is my original code
class m_VertexProps;
class m_EdgeProps;
class m_GraphProps;
using m_graph = boost::adjacency_list<boost::vecS, boost::vecS,
boost::bidirectionalS,// Vertex Properties...
m_VertexProps,
// Edge Propereties...
m_EdgeProps,
// Graph Properties
m_GraphProps>;
class A{
public:
A();
m_graph g;
};
Now I want to forward declare m_graph in class B. I don't want to include class A's header in class B's header file; I will include it in the .cpp file.
How can I forward declare the m_graph class?
I tried the following way, but it didn't work.
class m_graph;
class B{
public:
B();
};
A:
The goal as posed in the title is impossible: you cannot use using to cause a forward declaration¹.
However, what I think you are running into is a frequent problem when declaring BGL graphs with property bundles, when you need the graph traits inside the property bundles.
The trick is to use traits on the "base graph" inside your properties.
E.g.:
using BaseGraph = boost::adjacency_list<boost::vecS, boost::vecS, boost::bidirectionalS>;
using Traits = boost::graph_traits<BaseGraph>;
using VD = Traits::vertex_descriptor;
struct VertexProperties {
std::string metadata;
VD predecessor; // no forward declaration required!
};
struct EdgeProperties {
double weight;
};
using Graph = boost::adjacency_list<boost::vecS, boost::vecS, boost::bidirectionalS,
VertexProperties, EdgeProperties>;
Compilation Firewalling
If you really only want the properties to hide implementation details, just use the Pimpl Idiom like you would anywhere else.
Note it implies that you cannot have a template interface in your public header, unless it doesn't depend on the complete types of the implementation.
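A minimal sketch of that idiom for this case, assuming a class B that only needs the graph internally (nothing here comes from the question except the Boost types):
// b.hpp -- public header, no Boost includes required
#include <memory>

class B {
  public:
    B();
    ~B();                      // defined in the .cpp where Impl is complete
    void add_edge(int u, int v);
  private:
    struct Impl;               // holds the adjacency_list
    std::unique_ptr<Impl> impl_;
};

// b.cpp -- the only translation unit that sees the graph type
#include "b.hpp"
#include <boost/graph/adjacency_list.hpp>

struct B::Impl {
    boost::adjacency_list<> g;
};

B::B() : impl_(std::make_unique<Impl>()) {}
B::~B() = default;
void B::add_edge(int u, int v) { boost::add_edge(u, v, impl_->g); }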
Bonus Technique
As a bonus, if you want to completely forward declare the graph and only use them by reference or as return value in the header, you can use this trick:
struct Graph; // forward declare
And then at the implementation site:
struct Graph : boost::adjacency_list</*...*/> {
    using base_type = boost::adjacency_list</*...*/>;
using base_type::base_type, base_type::operator=;
};
For things to work well, you might need to implement delegating friends, and perhaps delegating traits:
template <> struct boost::graph_traits<Graph> :
boost::graph_traits<Graph::base_type> {};
¹ ignoring useless tricks like using _dummy = std::hash<struct Y>; - which does technically forward declare Y, but it adds no value over just struct Y;
|
How to forward declare an object type in class B if it is declared in class A by making use of the "using" keyword
|
Following is my original code
class m_VertexProps;
class m_EdgeProps;
class m_GraphProps;
using m_graph = boost::adjacency_list<boost::vecS, boost::vecS,
boost::bidirectionalS,// Vertex Properties...
m_VertexProps,
// Edge Propereties...
m_EdgeProps,
// Graph Properties
m_GraphProps>;
class A{
public:
A();
m_graph g;
};
Now I want to forward declare m_graph in class B. I don't want to include class A's header in class B's header file; I will include it in the .cpp file.
How can I forward declare the m_graph class?
I tried the following way, but it didn't work.
class m_graph;
class B{
public:
B();
};
|
[
"The goal as posed in the title is impossible: you cannot use using to cause a forward declaration¹.\nHowever, what I think you are running into is a frequent problem when declaring BGL graphs with property bundles, when you need the graph traits inside the property bundles.\nThe trick is to use traits on the \"base graph\" inside your properties.\nE.g.:\nusing BaseGraph = boost::adjacency_list<boost::vecS, boost::vecS, boost::bidirectionalS>;\nusing Traits = boost::graph_traits<BaseGraph>;\nusing VD = Traits::vertex_descriptor;\n\nstruct VertexProperties {\n std::string metadata;\n VD predecessor; // no forward declaration required!\n};\n\nstruct EdgeProperties {\n double weight;\n};\n\nusing Graph = boost::adjacency_list<boost::vecS, boost::vecS, boost::bidirectionalS,\n VertexProperties, EdgeProperties>;\n\nCompilation Firewalling\nIf you really only want the properties to hide implementation details, just use the Pimpl Idiom like you would anywhere else.\n\nNote it implies that you cannot have template interface in your public header, unless it doesn't depend on the complete types of the implmentation.\n\nBonus Technique\nAs a bonus, if you want to completely forward declare the graph and only use them by reference or as return value in the header, you can use this trick:\nstruct Graph; // forward declare\n\nAnd then at the implementation site:\nstruct Graph : boost::adjacency_list</*...*/> {\n using base_type = boost::adjacency_list</*...*/;\n using base_type::base_type, base_type::operator=;\n};\n\nFor things to work well, you might need to implement delegating friends, and perhaps delegating traits:\ntemplate <> struct boost::graph_traits<Graph> :\n boost::graph_traits<Graph::base_type> {};\n\n\n¹ ignoring useless tricks like using _dummy = std::hash<struct Y>; - which does technically forward declare Y, but it adds no value over just struct Y;\n"
] |
[
1
] |
[] |
[] |
[
"boost",
"c++"
] |
stackoverflow_0074657221_boost_c++.txt
|
Q:
Groovy program- convert string to xml
I want to convert the string to XML format; could you please correct the logic below?
class Demo {
static void main(String[] args) {
String keyList = """name=raj role=IT"""
def splitList = keyList.split("\n")
for (String item : splitList) {
// println(item)
def splitData = item.split("=")
for (String value : splitData)
{
println(value)
println("<"+value.getAt(0)+">"+value.getAt(1)+"/<"+value.getAt(0)+">")
}
}
}
}
output:
<name>raj</name>
<role>IT</name>
Thanks in advance!
A:
Creating XML like that is fraught with problems, and you will 100% generate invalid XML even if you get it working...
Better to use the classes provided to generate valid XML, for example this:
import groovy.xml.MarkupBuilder
String keyList = """name=raj role=IT
name=tim role=Solutions"""
def sw = new StringWriter()
new MarkupBuilder(sw).root {
keyList.eachLine {line ->
person {
line.split("\\s").each {
def (key, value) = it.split("=", 2)
"$key"(value)
}
}
}
}
println(sw.toString())
Which prints:
<root>
<person>
<name>raj</name>
<role>IT</role>
</person>
<person>
<name>tim</name>
<role>Solutions</role>
</person>
</root>
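The builder also takes care of escaping for you; for instance, a value containing markup characters comes out well-formed, which the hand-built string approach in the question would not guarantee. A tiny illustration (the element name is chosen arbitrarily):
import groovy.xml.MarkupBuilder

def out = new StringWriter()
new MarkupBuilder(out).note('a < b & c')
println(out)   // expected to print something like: <note>a &lt; b &amp; c</note>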
|
Groovy program- convert string to xml
|
I want to convert the string to XML format; could you please correct the logic below?
class Demo {
static void main(String[] args) {
String keyList = """name=raj role=IT"""
def splitList = keyList.split("\n")
for (String item : splitList) {
// println(item)
def splitData = item.split("=")
for (String value : splitData)
{
println(value)
println("<"+value.getAt(0)+">"+value.getAt(1)+"/<"+value.getAt(0)+">")
}
}
}
}
output:
<name>raj</name>
<role>IT</name>
Thanks in advance!
|
[
"Creating XML like that is fraught with problems, and you will 100% generate invalid XML even if you get it working...\nBetter to use the classes provided to generate valid XML, for example this:\nimport groovy.xml.MarkupBuilder\n\nString keyList = \"\"\"name=raj role=IT\nname=tim role=Solutions\"\"\"\ndef sw = new StringWriter()\nnew MarkupBuilder(sw).root {\n keyList.eachLine {line ->\n person {\n line.split(\"\\\\s\").each {\n def (key, value) = it.split(\"=\", 2)\n \"$key\"(value)\n }\n }\n }\n}\nprintln(sw.toString())\n\nWhich prints:\n<root>\n <person>\n <name>raj</name>\n <role>IT</role>\n </person>\n <person>\n <name>tim</name>\n <role>Solutions</role>\n </person>\n</root>\n\n"
] |
[
0
] |
[] |
[] |
[
"groovy"
] |
stackoverflow_0074657460_groovy.txt
|
Q:
NestJS Mongoose Schema Inheritence
I am attempting to inherit Mongoose Schemas or SchemaDefinitions within NestJS, but I am not having much luck.
I am doing this so I can share base and common schema definition details, such as a virtual('id') and a nonce, that we have attached to each of the entities. Each schema definition should have its own collection in Mongo, so discriminators will not work.
I tried to implement this in the following different ways.
First, I have the following Base Schema Definition defined:
base.schema.ts
import { Prop, Schema, SchemaFactory } from '@nestjs/mongoose';
import { Document, Types } from 'mongoose';
import { TimeStamps } from './timestamps.schema';
export type BaseDocument = BaseSchemaDefinition & Document;
@Schema({
toJSON: {
virtuals: true,
transform: function (doc: any, ret: any) {
delete ret._id;
delete ret.__v;
return ret;
},
},
})
export class BaseSchemaDefinition {
@Prop({
type: Types.ObjectId,
required: true,
default: Types.ObjectId,
})
nonce: Types.ObjectId;
@Prop()
timestamps: TimeStamps;
}
I then inherit the schema definition and create the schema so it can be used later in my services and controllers by the following:
person.schema.ts
import { Prop, SchemaFactory } from '@nestjs/mongoose';
import * as mongoose from 'mongoose';
import { Document } from 'mongoose';
import { Address } from './address.schema';
import { BaseSchemaDefinition } from './base.schema';
export type PersonDocument = PersonSchemaDefintion & Document;
export class PersonSchemaDefintion extends BaseSchemaDefinition {
@Prop({ required: true })
first_name: string;
@Prop({ required: true })
last_name: string;
@Prop()
middle_name: string;
@Prop()
data_of_birth: Date;
@Prop({ type: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Address' }] })
addresses: [Address];
}
const PersonSchema = SchemaFactory.createForClass(PersonSchemaDefintion);
PersonSchema.virtual('id').get(function (this: PersonDocument) {
return this._id;
});
export { PersonSchema };
This results in only allowing me to create and get properties defined in the BaseSchemaDefinition.
{
"timestamps": {
"deleted": null,
"updated": "2021-09-21T16:55:17.094Z",
"created": "2021-09-21T16:55:17.094Z"
},
"_id": "614a0e75eb6cb52aa0ccd026",
"nonce": "614a0e75eb6cb52aa0ccd028",
"__v": 0 }
Second, I then tried to implement inheritance by using the method described here
Inheriting Mongoose schemas (different MongoDB collections)
base.schema.ts
import { Prop, Schema, SchemaFactory } from '@nestjs/mongoose';
import { Document, Types } from 'mongoose';
import { TimeStamps } from './timestamps.schema';
export type BaseDocument = BaseSchemaDefinition & Document;
@Schema({
toJSON: {
virtuals: true,
transform: function (doc: any, ret: any) {
delete ret._id;
delete ret.__v;
return ret;
},
},
})
export class BaseSchemaDefinition {
@Prop({
type: Types.ObjectId,
required: true,
default: Types.ObjectId,
})
nonce: Types.ObjectId;
@Prop()
timestamps: TimeStamps;
}
const BaseSchema = SchemaFactory.createForClass(BaseSchemaDefinition);
BaseSchema.virtual('id').get(function (this: BaseDocument) {
return this._id;
});
export { BaseSchema };
person.schema.ts
import { Prop } from '@nestjs/mongoose';
import * as mongoose from 'mongoose';
import { Document } from 'mongoose';
import { Address } from './address.schema';
import { BaseSchema, BaseSchemaDefinition } from './base.schema';
export type PersonDocument = PersonSchemaDefintion & Document;
export class PersonSchemaDefintion extends BaseSchemaDefinition {
@Prop({ required: true })
first_name: string;
@Prop({ required: true })
last_name: string;
@Prop()
middle_name: string;
@Prop()
data_of_birth: Date;
@Prop({ type: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Address' }] })
addresses: [Address];
}
export const PersonSchema = Object.assign(
{},
BaseSchema.obj,
PersonSchemaDefintion,
);
This results in the same output. I am not sure why the inheritance is not taking effect.
The following is the service code that uses the schemas and builds the models
person.service.ts
import { Model } from 'mongoose';
import { Injectable } from '@nestjs/common';
import { InjectModel } from '@nestjs/mongoose';
import {
PersonSchemaDefintion,
PersonDocument,
} from 'src/schemas/person.schema';
import { TimeStamps } from 'src/schemas/timestamps.schema';
@Injectable()
export class PersonService {
constructor(
@InjectModel(PersonSchemaDefintion.name)
private personModel: Model<PersonDocument>,
) {}
async create(
personModel: PersonSchemaDefintion,
): Promise<PersonSchemaDefintion> {
personModel.timestamps = new TimeStamps();
const createdPerson = new this.personModel(personModel);
return createdPerson.save();
}
async update(
id: string,
changes: Partial<PersonSchemaDefintion>,
): Promise<PersonSchemaDefintion> {
const existingPerson = this.personModel
.findByIdAndUpdate(id, changes)
.exec()
.then(() => {
return this.personModel.findById(id);
});
if (!existingPerson) {
throw Error('Id does not exist');
}
return existingPerson;
}
async findAll(): Promise<PersonSchemaDefintion[]> {
return this.personModel.find().exec();
}
async findOne(id: string): Promise<PersonSchemaDefintion> {
return this.personModel.findById(id).exec();
}
async delete(id: string): Promise<string> {
return this.personModel.deleteOne({ _id: id }).then(() => {
return Promise.resolve(`${id} has been deleted`);
});
}
}
I can provide additional details if it is needed
A:
I think I have the same issue.
This is my solution:
First, you need custom @Schema decorator.
schema.decorator.ts
import * as mongoose from 'mongoose';
import { TypeMetadataStorage } from '@nestjs/mongoose/dist/storages/type-metadata.storage';
import * as _ from 'lodash';
export type SchemaOptions = mongoose.SchemaOptions & {
inheritOption?: boolean
}
function mergeOptions(parentOptions: SchemaOptions, childOptions: SchemaOptions) {
for (const key in childOptions) {
if (Object.prototype.hasOwnProperty.call(childOptions, key)) {
parentOptions[key] = childOptions[key];
}
}
return parentOptions;
}
export function Schema(options?: SchemaOptions): ClassDecorator {
return (target: Function) => {
const isInheritOptions = options.inheritOption;
if (isInheritOptions) {
let parentOptions = TypeMetadataStorage.getSchemaMetadataByTarget((target as any).__proto__).options;
parentOptions = _.cloneDeep(parentOptions)
options = mergeOptions(parentOptions, options);
}
TypeMetadataStorage.addSchemaMetadata({
target,
options
})
}
}
This is base schema.
cat.schema.ts
import { Prop, SchemaFactory } from "@nestjs/mongoose";
import { Schema } from '../../common/decorators/schema.decorator'
import { Document } from "mongoose";
export type CatDocument = Cat & Document;
@Schema({
timestamps: true,
toJSON: {
virtuals: true,
transform: function (doc: any, ret: any) {
delete ret._id;
delete ret.__v;
return ret;
},
},
})
export class Cat {
@Prop()
name: string;
@Prop()
age: number;
@Prop()
breed: string;
}
const CatSchema = SchemaFactory.createForClass(Cat);
CatSchema.virtual("id").get(function (this: CatDocument) {
return this._id;
});
export { CatSchema };
england-cat.schema.ts
import { Prop, SchemaFactory } from "@nestjs/mongoose";
import { Schema } from "../../common/decorators/schema.decorator";
import { Document } from "mongoose";
import { Cat } from "../../cats/schemas/cat.schema";
export type EnglandCatDocument = EnglandCat & Document;
@Schema({
inheritOption: true
})
export class EnglandCat extends Cat {
@Prop()
numberLegs: number;
}
export const EnglandCatSchema = SchemaFactory.createForClass(EnglandCat)
EnglandCat is a subclass of Cat and it inherits all options from Cat; you can overwrite some options if you want.
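For completeness, a registration sketch for the child schema; the module name and file paths here are assumptions, not from the original project:
import { Module } from '@nestjs/common';
import { MongooseModule } from '@nestjs/mongoose';
import { EnglandCat, EnglandCatSchema } from './schemas/england-cat.schema';

@Module({
  imports: [
    // EnglandCat gets its own model/collection, with the merged schema options applied
    MongooseModule.forFeature([{ name: EnglandCat.name, schema: EnglandCatSchema }]),
  ],
})
export class EnglandCatsModule {}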
A:
After fiddling around with it for a while, I found the right combination that appears to work when leveraging these technologies.
Here is the base class
base.schema.ts
import { Prop, Schema } from '@nestjs/mongoose';
import { Document, Types } from 'mongoose';
import { TimeStamps } from './timestamps.schema';
export type BaseDocument = Base & Document;
@Schema()
export class Base {
@Prop({
type: Types.ObjectId,
required: true,
default: Types.ObjectId,
})
nonce: Types.ObjectId;
@Prop()
timestamps: TimeStamps;
}
Here is the class that inherits the base.schema
person.schema.ts
import { Prop, Schema, SchemaFactory } from '@nestjs/mongoose';
import { Document, Types } from 'mongoose';
import { Address } from './address.schema';
import { Base } from './base.schema';
export type PersonDocument = Person & Document;
@Schema({
toJSON: {
virtuals: true,
transform: function (doc: any, ret: any) {
delete ret._id;
delete ret.__v;
return ret;
},
},
})
export class Person extends Base {
@Prop({ required: true })
first_name: string;
@Prop({ required: true })
last_name: string;
@Prop()
middle_name: string;
@Prop()
data_of_birth: Date;
@Prop({ type: [{ type: Types.ObjectId, ref: 'Address' }] })
addresses: [Address];
}
const PersonSchema = SchemaFactory.createForClass(Person);
PersonSchema.virtual('id').get(function (this: PersonDocument) {
return this._id;
});
export { PersonSchema };
The only thing I would like to improve on is moving the virtual('id') to the base class. However, the schema inheritance does not work there; at this point it only works with the schema definition. This at least gets me in the right direction. If anyone has a way to improve on this, please contribute.
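One possible direction, as an untested sketch: keep the virtual in a small helper and apply it to each schema created from a class that extends Base. The withIdVirtual name is my own and not part of @nestjs/mongoose.
import { Schema as MongooseSchema } from 'mongoose';

// Adds the shared 'id' virtual to any schema, so child schemas don't have to repeat it
export function withIdVirtual<TSchema extends MongooseSchema>(schema: TSchema): TSchema {
  schema.virtual('id').get(function (this: { _id: unknown }) {
    return this._id;
  });
  return schema;
}

// Usage in person.schema.ts:
// const PersonSchema = withIdVirtual(SchemaFactory.createForClass(Person));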
A:
Hieu Cao's answer is the correct one, because the question is about "Schema Inheritance". Your accepted answer isn't really about extending a schema; it's about basic class inheritance, and you don't have any schema options in Base.
|
NestJS Mongoose Schema Inheritence
|
I am attempting to inherit Mongoose Schemas or SchemaDefitions within NestJS but I am not having much luck.
I am doing this so I can share Base and Common Schema Definition Details such as a virtual('id') and a nonce, we have attached to each of the entities. Each schema definition should have its own collection in Mongo, so discriminators will not work.
I tried to implement this in the following different ways
First, I have the following Base Schema Definition defined:
base.schema.ts
import { Prop, Schema, SchemaFactory } from '@nestjs/mongoose';
import { Document, Types } from 'mongoose';
import { TimeStamps } from './timestamps.schema';
export type BaseDocument = BaseSchemaDefinition & Document;
@Schema({
toJSON: {
virtuals: true,
transform: function (doc: any, ret: any) {
delete ret._id;
delete ret.__v;
return ret;
},
},
})
export class BaseSchemaDefinition {
@Prop({
type: Types.ObjectId,
required: true,
default: Types.ObjectId,
})
nonce: Types.ObjectId;
@Prop()
timestamps: TimeStamps;
}
I then inherit the schema definition and create the schema so it can be used later in my services and controllers by the following:
person.schema.ts
import { Prop, SchemaFactory } from '@nestjs/mongoose';
import * as mongoose from 'mongoose';
import { Document } from 'mongoose';
import { Address } from './address.schema';
import { BaseSchemaDefinition } from './base.schema';
export type PersonDocument = PersonSchemaDefintion & Document;
export class PersonSchemaDefintion extends BaseSchemaDefinition {
@Prop({ required: true })
first_name: string;
@Prop({ required: true })
last_name: string;
@Prop()
middle_name: string;
@Prop()
data_of_birth: Date;
@Prop({ type: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Address' }] })
addresses: [Address];
}
const PersonSchema = SchemaFactory.createForClass(PersonSchemaDefintion);
PersonSchema.virtual('id').get(function (this: PersonDocument) {
return this._id;
});
export { PersonSchema };
This results in only allowing me to create and get properties defined in the BaseSchemaDefinition.
{
"timestamps": {
"deleted": null,
"updated": "2021-09-21T16:55:17.094Z",
"created": "2021-09-21T16:55:17.094Z"
},
"_id": "614a0e75eb6cb52aa0ccd026",
"nonce": "614a0e75eb6cb52aa0ccd028",
"__v": 0 }
Second, I then tried to implement inheritance by using the method described here
Inheriting Mongoose schemas (different MongoDB collections)
base.schema.ts
import { Prop, Schema, SchemaFactory } from '@nestjs/mongoose';
import { Document, Types } from 'mongoose';
import { TimeStamps } from './timestamps.schema';
export type BaseDocument = BaseSchemaDefinition & Document;
@Schema({
toJSON: {
virtuals: true,
transform: function (doc: any, ret: any) {
delete ret._id;
delete ret.__v;
return ret;
},
},
})
export class BaseSchemaDefinition {
@Prop({
type: Types.ObjectId,
required: true,
default: Types.ObjectId,
})
nonce: Types.ObjectId;
@Prop()
timestamps: TimeStamps;
}
const BaseSchema = SchemaFactory.createForClass(BaseSchemaDefinition);
BaseSchema.virtual('id').get(function (this: BaseDocument) {
return this._id;
});
export { BaseSchema };
person.schema.ts
import { Prop } from '@nestjs/mongoose';
import * as mongoose from 'mongoose';
import { Document } from 'mongoose';
import { Address } from './address.schema';
import { BaseSchema, BaseSchemaDefinition } from './base.schema';
export type PersonDocument = PersonSchemaDefintion & Document;
export class PersonSchemaDefintion extends BaseSchemaDefinition {
@Prop({ required: true })
first_name: string;
@Prop({ required: true })
last_name: string;
@Prop()
middle_name: string;
@Prop()
data_of_birth: Date;
@Prop({ type: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Address' }] })
addresses: [Address];
}
export const PersonSchema = Object.assign(
{},
BaseSchema.obj,
PersonSchemaDefintion,
);
Results in the same output. Not sure why the inheritance is not taking effect.
The following is the service code that uses the schemas and builds the models
person.service.ts
import { Model } from 'mongoose';
import { Injectable } from '@nestjs/common';
import { InjectModel } from '@nestjs/mongoose';
import {
PersonSchemaDefintion,
PersonDocument,
} from 'src/schemas/person.schema';
import { TimeStamps } from 'src/schemas/timestamps.schema';
@Injectable()
export class PersonService {
constructor(
@InjectModel(PersonSchemaDefintion.name)
private personModel: Model<PersonDocument>,
) {}
async create(
personModel: PersonSchemaDefintion,
): Promise<PersonSchemaDefintion> {
personModel.timestamps = new TimeStamps();
const createdPerson = new this.personModel(personModel);
return createdPerson.save();
}
async update(
id: string,
changes: Partial<PersonSchemaDefintion>,
): Promise<PersonSchemaDefintion> {
const existingPerson = this.personModel
.findByIdAndUpdate(id, changes)
.exec()
.then(() => {
return this.personModel.findById(id);
});
if (!existingPerson) {
throw Error('Id does not exist');
}
return existingPerson;
}
async findAll(): Promise<PersonSchemaDefintion[]> {
return this.personModel.find().exec();
}
async findOne(id: string): Promise<PersonSchemaDefintion> {
return this.personModel.findById(id).exec();
}
async delete(id: string): Promise<string> {
return this.personModel.deleteOne({ _id: id }).then(() => {
return Promise.resolve(`${id} has been deleted`);
});
}
}
I can provide additional details if it is needed
|
[
"I think I have same issue.\nThis is my solution:\nFirst, you need custom @Schema decorator.\nschema.decorator.ts\nimport * as mongoose from 'mongoose';\nimport { TypeMetadataStorage } from '@nestjs/mongoose/dist/storages/type-metadata.storage';\nimport * as _ from 'lodash';\n\nexport type SchemaOptions = mongoose.SchemaOptions & {\n inheritOption?: boolean\n}\n\nfunction mergeOptions(parentOptions: SchemaOptions, childOptions: SchemaOptions) {\n for (const key in childOptions) {\n if (Object.prototype.hasOwnProperty.call(childOptions, key)) {\n parentOptions[key] = childOptions[key];\n }\n }\n return parentOptions;\n}\n\nexport function Schema(options?: SchemaOptions): ClassDecorator {\n return (target: Function) => {\n const isInheritOptions = options.inheritOption;\n\n if (isInheritOptions) {\n let parentOptions = TypeMetadataStorage.getSchemaMetadataByTarget((target as any).__proto__).options;\n parentOptions = _.cloneDeep(parentOptions) \n options = mergeOptions(parentOptions, options);\n }\n\n TypeMetadataStorage.addSchemaMetadata({\n target,\n options\n })\n }\n}\n\nThis is base schema.\ncat.schema.ts\nimport { Prop, SchemaFactory } from \"@nestjs/mongoose\";\nimport { Schema } from '../../common/decorators/schema.decorator'\nimport { Document } from \"mongoose\";\n\nexport type CatDocument = Cat & Document;\n\n@Schema({\n timestamps: true,\n toJSON: {\n virtuals: true,\n transform: function (doc: any, ret: any) {\n delete ret._id;\n delete ret.__v;\n return ret;\n },\n },\n})\nexport class Cat {\n @Prop()\n name: string;\n\n @Prop()\n age: number;\n\n @Prop()\n breed: string;\n}\n\nconst CatSchema = SchemaFactory.createForClass(Cat);\n\nCatSchema.virtual(\"id\").get(function (this: CatDocument) {\n return this._id;\n});\n\nexport { CatSchema };\n\nengland-cat.schema.ts\nimport { Prop, SchemaFactory } from \"@nestjs/mongoose\";\nimport { Schema } from \"../../common/decorators/schema.decorator\";\nimport { Document } from \"mongoose\";\nimport { Cat } from \"../../cats/schemas/cat.schema\";\n\nexport type EnglandCatDocument = EnglandCat & Document;\n\n@Schema({\n inheritOption: true\n})\nexport class EnglandCat extends Cat {\n @Prop()\n numberLegs: number;\n}\n\nexport const EnglandCatSchema = SchemaFactory.createForClass(EnglandCat)\n\nEnglandCat is subclass of Cat and it inherits all options from Cat, you can overwrite some options if you want.\n",
"After fiddling around with it for a while I found the right combination that appears to work when leveraging these technologies\nHere is the base class\nbase.schema.ts\nimport { Prop, Schema } from '@nestjs/mongoose';\nimport { Document, Types } from 'mongoose';\nimport { TimeStamps } from './timestamps.schema';\n\nexport type BaseDocument = Base & Document;\n\n@Schema()\nexport class Base {\n @Prop({\n type: Types.ObjectId,\n required: true,\n default: Types.ObjectId,\n })\n nonce: Types.ObjectId;\n\n @Prop()\n timestamps: TimeStamps;\n}\n\nHere is the class that inherits the base.schema\nperson.schema.ts\nimport { Prop, Schema, SchemaFactory } from '@nestjs/mongoose';\nimport { Document, Types } from 'mongoose';\nimport { Address } from './address.schema';\nimport { Base } from './base.schema';\n\nexport type PersonDocument = Person & Document;\n\n@Schema({\n toJSON: {\n virtuals: true,\n transform: function (doc: any, ret: any) {\n delete ret._id;\n delete ret.__v;\n return ret;\n },\n },\n})\nexport class Person extends Base {\n @Prop({ required: true })\n first_name: string;\n\n @Prop({ required: true })\n last_name: string;\n\n @Prop()\n middle_name: string;\n\n @Prop()\n data_of_birth: Date;\n\n @Prop({ type: [{ type: Types.ObjectId, ref: 'Address' }] })\n addresses: [Address];\n}\nconst PersonSchema = SchemaFactory.createForClass(Person);\n\nPersonSchema.virtual('id').get(function (this: PersonDocument) {\n return this._id;\n});\n\nexport { PersonSchema };\n\nThe only thing I would like to improve on is moving the virtual('id') to the base class. However the schema inheritance does not work. At this point, it will only work with the Schema Definition. This at least gets me in the right direction. If anyone has a way to improve on this please contribute.\n",
"those Hieu Cao answer's is correct because the question is \"Schema Inheritance\". Your checked answer doesn't related about extending schema, it's about basic inherit class, you doesn't have any schema options in Base.\n"
] |
[
3,
1,
0
] |
[] |
[] |
[
"mongodb",
"mongoose",
"nestjs",
"node.js"
] |
stackoverflow_0069272891_mongodb_mongoose_nestjs_node.js.txt
|
Q:
Unable to install coursera-dl
I accidentally deleted a file (I think it was called coursera-dl.exe) from C:\python310\lib\site-packages. I tried to uninstall it using:
pip uninstall coursera-dl
it showed this warning:
WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages)
WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages)
but it was successfully uninstalled.
I tried to reinstall it using:
pip install coursera-dl
but it gives this error:
WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages)
WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages)
WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages)
WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages)
WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages)
WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages)
Requirement already satisfied: coursera-dl in c:\python310\lib\site-packages (0.11.5)
Requirement already satisfied: six>=1.5.0 in c:\python310\lib\site-packages (from coursera-dl) (1.16.0)
Requirement already satisfied: keyring>=4.0 in c:\python310\lib\site-packages (from coursera-dl) (23.9.1)
Requirement already satisfied: requests>=2.10.0 in c:\python310\lib\site-packages (from coursera-dl) (2.28.1)
Requirement already satisfied: beautifulsoup4>=4.1.3 in c:\python310\lib\site-packages (from coursera-dl) (4.11.1)
Requirement already satisfied: configargparse>=0.12.0 in c:\python310\lib\site-packages (from coursera-dl) (1.5.3)
Requirement already satisfied: pyasn1>=0.1.7 in c:\python310\lib\site-packages (from coursera-dl) (0.4.8)
Requirement already satisfied: attrs==18.1.0 in c:\python310\lib\site-packages (from coursera-dl) (18.1.0)
Requirement already satisfied: urllib3>=1.23 in c:\python310\lib\site-packages (from coursera-dl) (1.26.12)
Requirement already satisfied: soupsieve>1.2 in c:\python310\lib\site-packages (from beautifulsoup4>=4.1.3->coursera-dl) (2.3.2.post1)
Requirement already satisfied: jaraco.classes in c:\python310\lib\site-packages (from keyring>=4.0->coursera-dl) (3.2.2)
Requirement already satisfied: pywin32-ctypes!=0.1.0,!=0.1.1 in c:\python310\lib\site-packages (from keyring>=4.0->coursera-dl) (0.2.0)
Requirement already satisfied: idna<4,>=2.5 in c:\python310\lib\site-packages (from requests>=2.10.0->coursera-dl) (3.3)
Requirement already satisfied: charset-normalizer<3,>=2 in c:\python310\lib\site-packages (from requests>=2.10.0->coursera-dl) (2.1.1)
Requirement already satisfied: certifi>=2017.4.17 in c:\python310\lib\site-packages (from requests>=2.10.0->coursera-dl) (2022.6.15.1)
Requirement already satisfied: more-itertools in c:\python310\lib\site-packages (from jaraco.classes->keyring>=4.0->coursera-dl) (8.14.0)
WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages)
WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages)
WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages)
WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages)
WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages)
WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages)
Any help will be appreciated. Thanks in advance.
A:
Try:
pip install --upgrade --force-reinstall coursera-dl
or
pip install --ignore-installed coursera-dl
|
Unable to install coursera-dl
|
I accidentally deleted a file (I think it was called coursera-dl.exe) from C:\python310\lib\site-packages. I tried to uninstall it using:
pip uninstall coursera-dl
it showed this warning:
WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages)
WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages)
but it was successfully uninstalled.
I tried to reinstall it using:
pip install coursera-dl
but it gives this error:
WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages)
WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages)
WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages)
WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages)
WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages)
WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages)
Requirement already satisfied: coursera-dl in c:\python310\lib\site-packages (0.11.5)
Requirement already satisfied: six>=1.5.0 in c:\python310\lib\site-packages (from coursera-dl) (1.16.0)
Requirement already satisfied: keyring>=4.0 in c:\python310\lib\site-packages (from coursera-dl) (23.9.1)
Requirement already satisfied: requests>=2.10.0 in c:\python310\lib\site-packages (from coursera-dl) (2.28.1)
Requirement already satisfied: beautifulsoup4>=4.1.3 in c:\python310\lib\site-packages (from coursera-dl) (4.11.1)
Requirement already satisfied: configargparse>=0.12.0 in c:\python310\lib\site-packages (from coursera-dl) (1.5.3)
Requirement already satisfied: pyasn1>=0.1.7 in c:\python310\lib\site-packages (from coursera-dl) (0.4.8)
Requirement already satisfied: attrs==18.1.0 in c:\python310\lib\site-packages (from coursera-dl) (18.1.0)
Requirement already satisfied: urllib3>=1.23 in c:\python310\lib\site-packages (from coursera-dl) (1.26.12)
Requirement already satisfied: soupsieve>1.2 in c:\python310\lib\site-packages (from beautifulsoup4>=4.1.3->coursera-dl) (2.3.2.post1)
Requirement already satisfied: jaraco.classes in c:\python310\lib\site-packages (from keyring>=4.0->coursera-dl) (3.2.2)
Requirement already satisfied: pywin32-ctypes!=0.1.0,!=0.1.1 in c:\python310\lib\site-packages (from keyring>=4.0->coursera-dl) (0.2.0)
Requirement already satisfied: idna<4,>=2.5 in c:\python310\lib\site-packages (from requests>=2.10.0->coursera-dl) (3.3)
Requirement already satisfied: charset-normalizer<3,>=2 in c:\python310\lib\site-packages (from requests>=2.10.0->coursera-dl) (2.1.1)
Requirement already satisfied: certifi>=2017.4.17 in c:\python310\lib\site-packages (from requests>=2.10.0->coursera-dl) (2022.6.15.1)
Requirement already satisfied: more-itertools in c:\python310\lib\site-packages (from jaraco.classes->keyring>=4.0->coursera-dl) (8.14.0)
WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages)
WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages)
WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages)
WARNING: Ignoring invalid distribution -oursera-dl (c:\python310\lib\site-packages)
WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages)
WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages)
Any help will be appreciated. Thanks in advance.
|
[
"Try:\npip install --upgrade --force-reinstall coursera-dl\n\nor\npip install --ignore-installed coursera-dl\n\n"
] |
[
0
] |
[] |
[] |
[
"cmd",
"coursera_api",
"python"
] |
stackoverflow_0074631073_cmd_coursera_api_python.txt
|
Q:
Sprites are distorted when scaled
I have a game I'm working on with Phaser 3, and I have an icon for a sword in the player's inventory:
But when I scale it using sprite.setScale(0.5), I get this monstrosity:
Any idea why this happens? thanks!
A:
OK, so the problem seems to be fixed by making sure the resulting width and height values are integers. My actual scaling code was sprite.setScale(0.45), and changing the value to 0.5 was a quick fix, giving the much improved end result shown here:
PS: Thanks to @winner_joiner for the suggestion
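For reference, a small sketch of the idea; the snapScale helper is illustrative, and sprite is assumed to be the icon sprite:
// Snap a desired scale so the scaled width lands on whole pixels,
// which avoids sampling the texture at fractional sizes.
function snapScale(sprite: Phaser.GameObjects.Sprite, targetScale: number): number {
  return Math.round(sprite.width * targetScale) / sprite.width;
}

// e.g. instead of sprite.setScale(0.45):
sprite.setScale(snapScale(sprite, 0.45));
Depending on the art style, Phaser's pixelArt / roundPixels render settings may also be worth a look, although that was not tested here.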
|
Sprites are distorted when scaled
|
I have a game I'm working on with Phaser 3, and I have an icon for a sword in the player's inventory:
But when I scale it using sprite.setScale(0.5), I get this monstrosity:
Any idea why this happens? thanks!
|
[
"Ok, so the problem seems to be fixed by making sure the end value for the height and width values were integers, my actual code for scaling was sprite.setScale(0.45), and changing the value to 0.5 was a quick fix, getting the much improved end result of this: \nPS: Thanks to @winner_joiner for the suggestion\n"
] |
[
1
] |
[] |
[] |
[
"javascript",
"phaser_framework"
] |
stackoverflow_0074658043_javascript_phaser_framework.txt
|
Q:
Import-AzKeyVaultCertificate with -CertificateString throws error
I'm trying to import a self signed PFX certificate (with private key) in Azure Key Vault with the Import-AzKeyVaultCertificate command using the -CertificateString parameter.
But when I run this command I get the following error message:
Import-AzKeyVaultCertificate : The specified PKCS#12 X.509 certificate
content can not be read. Please check if certificate is in valid
PKCS#12 format. Status: 400 (Bad Request)
I can import the very same PFX certificate manually in Key Vault, without any problems. But I need to do this using -CertificateString for a deployment script.
So I converted my PFX certificate into a Base64 string using PowerShell:
$fileContentBytes = get-content ".\myCert.pfx" -Encoding Byte
[System.Convert]::ToBase64String($fileContentBytes) | Out-File ".\pfx-base64.txt"
Multiple sites showed that this is the way to convert a PFX cert to a Base64 string. One of them is this one: https://learn.microsoft.com/en-us/answers/questions/258583/import-certificate-api-for-azure-key-vault.html
I then use that string in PowerShell like so:
$Secure_String_Pwd = ConvertTo-SecureString "MySecretPassword" -AsPlainText -Force;
Import-AzKeyVaultCertificate -VaultName "MyKeyVault" -Name "cert-signing" -CertificateString "MIIJagIBAzCCCSYGCS.....9oV21QwICB9A=" -Password $Secure_String_Pwd;
I don't understand why it's throwing an error. The certificate seems to be fine when I upload it manually. Why doesn't it work in Base64 form?
A:
I tried to import the certificate in my environment.
Here, when trying to import the certificate, it has to be imported with a password.
For that, the certificate must be created with a password set, so that when importing the .pfx certificate the private key is secured with that password.
In Cloud Shell, while setting up the self-signed certificate, set it with a password.
Or check the code below from Azure - Unable to "Import Certificate" using API in PowerShell - Stack Overflow.
Check if the password is sent in the correct format, and try sending it directly instead of converting it to a SecureString.
Ex:
$kvname = "newkaazurekeyvault"
$certname = "kaselfsignedcertific"
$tenantId ="xxxxxxxxx"
$subId="bxxxxxxx"
Connect-AzAccount -Subscription $subscriptionId -Tenant $tenantId
$resource="xxxxx"
$context= Get-AzContext
$token = [Microsoft.Azure.Commands.Common.Authentication.AzureSession]::Instance.AuthenticationFactory.Authenticate($context.Account,
$context.Environment,
$context.Tenant.Id.ToString(),
$null, [Microsoft.Azure.Commands.Common.Authentication.ShowDialog]::Never, $null, $resource).AccessToken
$pfxcontent = Get-Content 'C:\Users\vxxx\kaazurekeyvault-kaselfsixxx-2xxx.pfx' -Encoding Byte
$base64pfxcontent = [System.Convert]::ToBase64String($pfxcontent)
$json_new = @{
value= $base64pfxcontent
pwd= "Pxxx234"
policy= @{
secret_props= @{
contentType= "application/x-pkcs12"
}
}
}
$json = $json_new | ConvertTo-Json
$header = @{Authorization = "Bearer " + $token }
Invoke-RestMethod -Method Post -Uri "https://$kvname.vault.azure.net/certificates/$certname/import?api-version=7.0" -Body $json -Headers $header -ContentType "application/json"
So try to export the certificate as a PFX with a password:
$password = ConvertTo-SecureString "Password!" -AsPlainText -Force
Export-PfxCertificate -Cert "cert:\CurrentUser\My\$($cert.Thumbprint)" -FilePath C:\temp\cert2.pfx -Password $password
Then try to import that pfx certificate using the password.
Import-AzureKeyVaultCertificate -VaultName tempvault -Name certifcte -FilePath C:\temp\cert.pfx -Password $password
Then the certificate is imported successfully.
|
Import-AzKeyVaultCertificate with -CertificateString throws error
|
I'm trying to import a self signed PFX certificate (with private key) in Azure Key Vault with the Import-AzKeyVaultCertificate command using the -CertificateString parameter.
But when I run this command I get the following error message:
Import-AzKeyVaultCertificate : The specified PKCS#12 X.509 certificate
content can not be read. Please check if certificate is in valid
PKCS#12 format. Status: 400 (Bad Request)
I can import the very same PFX certificate manually in Key Vault, without any problems. But I need to do this using -CertificateString for a deployment script.
So I converted my PFX certificate into a Base64 string using PowerShell:
$fileContentBytes = get-content ".\myCert.pfx" -Encoding Byte
[System.Convert]::ToBase64String($fileContentBytes) | Out-File ".\pfx-base64.txt"
Multiple sites showed that this is the way to convert a PFX cert to a Base64 string. One of them is this one: https://learn.microsoft.com/en-us/answers/questions/258583/import-certificate-api-for-azure-key-vault.html
I then use that string in PowerShell like so:
$Secure_String_Pwd = ConvertTo-SecureString "MySecretPassword" -AsPlainText -Force;
Import-AzKeyVaultCertificate -VaultName "MyKeyVault" -Name "cert-signing" -CertificateString "MIIJagIBAzCCCSYGCS.....9oV21QwICB9A=" -Password $Secure_String_Pwd;
I don't understand why it's throwing an error. The certificate seems to be fine when I upload it manually. Why doesn't it work in Base64 form?
|
[
"I tried to import certificate in my environment.\nHere when trying to import certificate , it has to be imported with password.\nFor that the certificate while creating must be set with password so that while importing .pfx certificate private key is secured with password.\nIn cloud shell while setting up self signed certificate , set with password.\n\nOr check below code from azure - Unable to \"Import Certificate\" using API in PowerShell - Stack Overflow\nCheck if the password is sent in correct format and try sending instead converting to securestring.\nEx:\n$kvname = \"newkaazurekeyvault\"\n$certname = \"kaselfsignedcertific\"\n$tenantId =\"xxxxxxxxx\"\n$subId=\"bxxxxxxx\"\nConnect-AzAccount -Subscription $subscriptionId -Tenant $tenantId\n\n$resource=\"xxxxx\"\n$context= Get-AzContext\n$token = [Microsoft.Azure.Commands.Common.Authentication.AzureSession]::Instance.AuthenticationFactory.Authenticate($context.Account,\n $context.Environment, \n $context.Tenant.Id.ToString(), \n $null, [Microsoft.Azure.Commands.Common.Authentication.ShowDialog]::Never, $null, $resource).AccessToken\n\n$pfxcontent = Get-Content ‘C:\\Users\\vxxx\\kaazurekeyvault-kaselfsixxx-2xxx.pfx' -Encoding Byte\n$base64pfxcontent = [System.Convert]::ToBase64String($pfxcontent)\n\n$json_new = @{\n value= $base64Stringpfxcontent\n pwd= \"Pxxx234\"\n policy= @{\n secret_props= @{\n contentType= \"application/x-pkcs12\"\n }\n }\n}\n\n$json = $json_new | ConvertTo-Json\n\n$header = @{Authorization = \"Bearer \" + $token }\nInvoke-RestMethod -Method Post -Uri \"https://$kvname.vault.azure.net/certificates/$certname/import?api-version=7.0\" -Body $json -Headers $header -ContentType \"application/json\"\n\n\nSo try to Export the certificate in PFX with password\n$password = ConvertTo-SecureString \"Password!\" -AsPlainText -Force\n\nExport-PfxCertificate -Cert \"cert:\\CurrentUser\\My\\$($cert.Thumbprint)\" -FilePath C:\\temp\\cert2.pfx -Password $password\n\nThen try to import that pfx certificate using the password.\n Import-AzureKeyVaultCertificate -VaultName tempvault -Name certifcte -FilePath C:\\temp\\cert.pfx -Password $password\n\nThen the certificate is imported successfully.\n\n"
] |
[
0
] |
[] |
[] |
[
"azure_keyvault",
"pfx",
"powershell",
"tobase64string"
] |
stackoverflow_0074481906_azure_keyvault_pfx_powershell_tobase64string.txt
|
Q:
Node process.hrtime() is five years off?
I'm using Node v16.17 on MacBook Pro M1.
I want to use microsecond timestamps, so I tried process.hrtime().
But this is very strange, as the first array element (which should be seconds when multiplied by 1000) is like some date in 2017:
> new Date().getTime();
1669997280728
> process.hrtime();
[ 1486038, 90680583 ]
So, if I take 1486038000 --> it is Thu, 02 Feb 2017 12:20:00 GMT
If I take out the milliseconds from new Date().getTime() -> it is correctly Fri, 02 Dec 2022 16:08:00 GMT
What is the issue here? I thought process.hrtime() would be the high-resolution time, but why is this so off?
Thanks
Fritz
A:
Per the docs,
These times are relative to an arbitrary time in the past, and not related to the time of day and therefore not subject to clock drift. The primary use is for measuring performance between intervals
https://nodejs.org/api/process.html#processhrtimetime
It is only a coincidence that you got a somewhat relevant date.
You should be using process.hrtime.bigint(), however, because process.hrtime() has been legacy for a while (even in Node v16.17).
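For example, a minimal interval-measurement sketch with the non-legacy API; the work in the middle is just a placeholder:
const start = process.hrtime.bigint();   // monotonic time in nanoseconds, as a BigInt
// ... the code you want to time ...
const end = process.hrtime.bigint();
const micros = (end - start) / 1000n;    // ns -> µs (BigInt division truncates)
console.log(`took ${micros} µs`);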
A:
What?
process.hrtime() has nothing to do with the real-time clock, as is explained by the docs:
These times are relative to an arbitrary time in the past, and not related to the time of day and therefore not subject to clock drift.
(emphasis mine)
And,
The primary use is for measuring performance between intervals:
|
Node process.hrtime() is five years off?
|
I'm using Node v16.17 on MacBook Pro M1.
I want to use microsecond timestamps, so I tried process.hrtime().
But this is very strange, as the first array element (which should be seconds when multiplied by 1000) is like some date in 2017:
> new Date().getTime();
1669997280728
> process.hrtime();
[ 1486038, 90680583 ]
So, if I take 1486038000 --> it is Thu, 02 Feb 2017 12:20:00 GMT
If I take out the milliseconds from new Date().getTime() -> it is correctly Fri, 02 Dec 2022 16:08:00 GMT
What is the issue here? I thought process.hrtime() would be the high-resolution time, but why is this so off?
Thanks
Fritz
|
[
"Per the docs,\n\nThese times are relative to an arbitrary time in the past, and not related to the time of day and therefore not subject to clock drift. The primary use is for measuring performance between intervals\nhttps://nodejs.org/api/process.html#processhrtimetime\n\nIt is only a coincidence that you got a somewhat relevant date.\nYou should be using process.hrtime.bigint(), however, because process.hrtime() has been legacy for a while (even in Node v16.17).\n",
"What?\nprocess.hrtime() has nothing do to with the real-time clock, as is explained by the docs:\n\nThese times are relative to an arbitrary time in the past, and not related to the time of day and therefore not subject to clock drift.\n\n(emphasis mine)\nAnd,\n\nThe primary use is for measuring performance between intervals:\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"apple_m1",
"javascript",
"node.js"
] |
stackoverflow_0074658634_apple_m1_javascript_node.js.txt
|
Q:
I wanted to ask how I can connect a DNS server with my apps; all traffic will go through that server
I wanted to ask how I can connect a DNS server with my apps. All traffic will go through that server.
I wanted to ask how I can connect a DNS server with my apps. All traffic will go through that server.
|
I wanted to ask how I can connect a DNS server with my apps; all traffic will go through that server
|
I wanted to ask how I can connect a DNS server with my apps. All traffic will go through that server.
I wanted to ask how I can connect a DNS server with my apps. All traffic will go through that server.
|
[] |
[] |
[
"First off, traffic doesn't go through a DNS server. It isn't a proxy. But let's assume you knew that and only wanted to use it for your IP lookups. In that case, you can't set it at the OS level. The OS level would apply to your whole phone, not just your app. And no such API exists because it would be a security issue- you could redirect bank URLs to your own server.\nWhat you could do in your own app is download a DNS library, do the query yourself, and then when you want to make an HTTP call you alter the URL to the IP provided from that lookup, instead of the hostname you'd usually use. That's a lot more work on your end, but would work.\n",
"You cannot set the DNS from an app. Use this link to set it from the settings. It would not make sense for an app to set it for the whole OS. However, you could use your own DNS api which lets you select the DNS you want to use.\n"
] |
[
-1,
-1
] |
[
"android",
"android_studio",
"java",
"networking",
"performance"
] |
stackoverflow_0074658279_android_android_studio_java_networking_performance.txt
|
Q:
How to customize a reusable Widget in Flutter
i'm using "SubCategoryTiles" widget all over the app. initially i was using same Group Icon but now i want to use different-different category_Icon wherever i use this widget. So i want to know how to do it. See the code and also the image with error i'm getting.
class SubCategoryTiles extends StatelessWidget {
const SubCategoryTiles({
required this.titleText,
required this.onTapHandler,
required this.category_Icon,
});
final Widget titleText;
final VoidCallback onTapHandler;
final IconData category_Icon;
@override
Widget build(BuildContext context) {
return ListTile(
leading: const CircleAvatar(
backgroundColor: Colors.white,
child: category_Icon,
// Icon(
// Icons.group,
// color: Colors.deepOrange,
// ),
),
title: titleText,
trailing: const Icon(Icons.arrow_right),
onTap: onTapHandler,
);
}
}
A:
Wrap your category_Icon in an Icon widget
child: Icon(category_Icon),
IconData is not a widget, but an Icon is.
A:
It should be like this:
class SubCategoryTiles extends StatelessWidget {
const SubCategoryTiles({
required this.titleText,
required this.onTapHandler,
required this.category_Icon,
});
final Widget titleText;
final VoidCallback onTapHandler;
final IconData category_Icon;
@override
Widget build(BuildContext context) {
return ListTile(
leading: CircleAvatar(
backgroundColor: Colors.white,
child: Icon(
category_Icon,
color: Colors.deepOrange,
),
// Icon(
// Icons.group,
// color: Colors.deepOrange,
// ),
),
title: titleText,
trailing: const Icon(Icons.arrow_right),
onTap: onTapHandler,
);
}
}
// This is how it should be added when you require it.
category_Icon: Icons.add_box,
|
How to customize a reusable Widget in Flutter
|
i'm using "SubCategoryTiles" widget all over the app. initially i was using same Group Icon but now i want to use different-different category_Icon wherever i use this widget. So i want to know how to do it. See the code and also the image with error i'm getting.
class SubCategoryTiles extends StatelessWidget {
const SubCategoryTiles({
required this.titleText,
required this.onTapHandler,
required this.category_Icon,
});
final Widget titleText;
final VoidCallback onTapHandler;
final IconData category_Icon;
@override
Widget build(BuildContext context) {
return ListTile(
leading: const CircleAvatar(
backgroundColor: Colors.white,
child: category_Icon,
// Icon(
// Icons.group,
// color: Colors.deepOrange,
// ),
),
title: titleText,
trailing: const Icon(Icons.arrow_right),
onTap: onTapHandler,
);
}
}
|
[
"Wrap your category_Icon in a Icon widget\nchild: Icon(category_Icon),\n\nIconData is not a widget, but an Icon is.\n",
"The should be like this:\nclass SubCategoryTiles extends StatelessWidget {\n const SubCategoryTiles({\n required this.titleText,\n required this.onTapHandler,\n required this.category_Icon,\n });\n\n final Widget titleText;\n final VoidCallback onTapHandler;\n final IconData category_Icon;\n @override\n Widget build(BuildContext context) {\n return ListTile(\n leading: CircleAvatar(\n backgroundColor: Colors.white,\n child: Icon(\n category_Icon,\n color: Colors.deepOrange,\n ),\n\n // Icon(\n // Icons.group,\n // color: Colors.deepOrange,\n // ),\n ),\n title: titleText,\n trailing: const Icon(Icons.arrow_right),\n onTap: onTapHandler,\n );\n }\n}\n\n// This is how it should be added when you require it.\ncategory_Icon: Icons.add_box,\n"
] |
[
0,
0
] |
[] |
[] |
[
"dart",
"flutter"
] |
stackoverflow_0074658056_dart_flutter.txt
|
Q:
NetFileEnum (API) always returns 5
I'm creating an app that catches changes in files (with FileSystemWatcher()), and then I want to get the user that has a specific file open (not the owner of the file).
I found code for C# and translated it to VB.NET.
Problem: The code compiles, but NetFileEnum always returns 5 (instead of 0).
I don't really have experience with API functions, so maybe I'm doing something wrong in the implementation.
Thanks for any answer.
My code:
Call:
dim cUsername = GetUsernameHandlingFile(<FileWithPath>)
'<FileWithPath> = E.g. M:\temp\abc.xls (whereby M:\ is a network drive)
Functions:
Imports System.Runtime.InteropServices
Module FileOffenDurch
<DllImport("Netapi32.dll", SetLastError:=True)>
Private Function NetApiBufferFree(ByVal Buffer As IntPtr) As Integer
End Function
<StructLayout(LayoutKind.Sequential, CharSet:=CharSet.Auto, Pack:=4)>
Structure FILE_INFO_3
Public fi3_id As Integer
Public fi3_permission As Integer
Public fi3_num_locks As Integer
Public fi3_pathname As String
Public fi3_username As String
End Structure
<DllImport("netapi32.dll", SetLastError:=True, CharSet:=CharSet.Unicode)>
Private Function NetFileEnum(ByVal servername As String, ByVal basepath As String, ByVal username As String, ByVal level As Integer, ByRef bufptr As IntPtr, ByVal prefmaxlen As Integer, <Out> ByRef entriesread As Integer, <Out> ByRef totalentries As Integer, ByVal resume_handle As IntPtr) As Integer
End Function
<DllImport("netapi32.dll", SetLastError:=True, CharSet:=CharSet.Unicode)>
Private Function NetFileGetInfo(ByVal servername As String, ByVal fileid As Integer, ByVal level As Integer, ByRef bufptr As IntPtr) As Integer
End Function
Public Function GetFileIdFromPath(ByVal filePath As String) As Integer
Const MAX_PREFERRED_LENGTH As Integer = -1
Dim dwReadEntries As Integer
Dim dwTotalEntries As Integer
Dim pBuffer As IntPtr = IntPtr.Zero
Dim pCurrent As FILE_INFO_3 = New FILE_INFO_3()
Dim dwStatus As Integer = NetFileEnum(Nothing, filePath, Nothing, 3, pBuffer, MAX_PREFERRED_LENGTH, dwReadEntries, dwTotalEntries, IntPtr.Zero)
' => dwStatus always returns 5 (instead of 0)
If dwStatus = 0 Then
For dwIndex As Integer = 0 To dwReadEntries - 1
Dim iPtr As IntPtr = New IntPtr(pBuffer.ToInt32() + (dwIndex * Marshal.SizeOf(pCurrent)))
pCurrent = CType(Marshal.PtrToStructure(iPtr, GetType(FILE_INFO_3)), FILE_INFO_3)
Dim fileId As Integer = pCurrent.fi3_id
NetApiBufferFree(pBuffer)
Return fileId
Next
End If
NetApiBufferFree(pBuffer)
Return -1
End Function
Public Function GetUsernameHandlingFile(ByVal fileId As Integer) As String
Dim defaultValue As String = "[Unknown User]"
If fileId = -1 Then
Return defaultValue
End If
Dim pBuffer_Info As IntPtr = IntPtr.Zero
Dim dwStatus_Info As Integer = NetFileGetInfo(Nothing, fileId, 3, pBuffer_Info)
If dwStatus_Info = 0 Then
Dim iPtr_Info As IntPtr = New IntPtr(pBuffer_Info.ToInt32())
Dim pCurrent_Info As FILE_INFO_3 = CType(Marshal.PtrToStructure(iPtr_Info, GetType(FILE_INFO_3)), FILE_INFO_3)
NetApiBufferFree(pBuffer_Info)
Return pCurrent_Info.fi3_username
End If
NetApiBufferFree(pBuffer_Info)
Return defaultValue
End Function
Public Function GetUsernameHandlingFile(ByVal filePath As String) As String
Dim fileId As Integer = GetFileIdFromPath(filePath)
' Always returns -1
Return GetUsernameHandlingFile(fileId)
End Function
End Module
A:
First, thanks to all that have posted messages here.
The good news:
My posted code works…
The bad news:
It's a (heavy) problem with rights (return code 5 is the Win32 error ERROR_ACCESS_DENIED).
The only way I got it to work was to run the app directly on the file server with admin rights (logged in on the server as Administrator).
I had local admin rights on my client and my account was in the workstation admin group (of the domain); adding the "domain admin" and "administrator" groups to my account… didn't work.
Logged in as (local) administrator… didn't work.
Also tried a UNC path (instead of the logical drive)… didn't work.
So… the purpose of the app was to let the user:
config some drives (and directories) to monitor with FileSystemWatcher()
select the datatypes to monitor (e.g. .docx, .xlsx)
get and store all accesses to the files
if the file is stored on a server, get the username of the user who last stored the file with NetFileEnum(), and also store that user name with the data
let the user set a filter to only his own files (for network drives)
So… unfortunately, for me (my app), NetFileEnum() is simply not usable.
|
NetFileEnum (API) always returns 5
|
I'm creating an app that catches changes in files (with FileSystemWatcher()), and then I want to get the user that has a specific file open (not the owner of the file).
I found code for C# and translated it to VB.NET.
Problem: The code compiles, but NetFileEnum always returns 5 (instead of 0).
I don't really have experience with API functions, so maybe I'm doing something wrong in the implementation.
Thanks for any answer.
My code:
Call:
dim cUsername = GetUsernameHandlingFile(<FileWithPath>)
'<FileWithPath> = E.g. M:\temp\abc.xls (whereby M:\ is a network drive)
Functions:
Imports System.Runtime.InteropServices
Module FileOffenDurch
<DllImport("Netapi32.dll", SetLastError:=True)>
Private Function NetApiBufferFree(ByVal Buffer As IntPtr) As Integer
End Function
<StructLayout(LayoutKind.Sequential, CharSet:=CharSet.Auto, Pack:=4)>
Structure FILE_INFO_3
Public fi3_id As Integer
Public fi3_permission As Integer
Public fi3_num_locks As Integer
Public fi3_pathname As String
Public fi3_username As String
End Structure
<DllImport("netapi32.dll", SetLastError:=True, CharSet:=CharSet.Unicode)>
Private Function NetFileEnum(ByVal servername As String, ByVal basepath As String, ByVal username As String, ByVal level As Integer, ByRef bufptr As IntPtr, ByVal prefmaxlen As Integer, <Out> ByRef entriesread As Integer, <Out> ByRef totalentries As Integer, ByVal resume_handle As IntPtr) As Integer
End Function
<DllImport("netapi32.dll", SetLastError:=True, CharSet:=CharSet.Unicode)>
Private Function NetFileGetInfo(ByVal servername As String, ByVal fileid As Integer, ByVal level As Integer, ByRef bufptr As IntPtr) As Integer
End Function
Public Function GetFileIdFromPath(ByVal filePath As String) As Integer
Const MAX_PREFERRED_LENGTH As Integer = -1
Dim dwReadEntries As Integer
Dim dwTotalEntries As Integer
Dim pBuffer As IntPtr = IntPtr.Zero
Dim pCurrent As FILE_INFO_3 = New FILE_INFO_3()
Dim dwStatus As Integer = NetFileEnum(Nothing, filePath, Nothing, 3, pBuffer, MAX_PREFERRED_LENGTH, dwReadEntries, dwTotalEntries, IntPtr.Zero)
' => dwStatus always returns 5 (instead of 0)
If dwStatus = 0 Then
For dwIndex As Integer = 0 To dwReadEntries - 1
Dim iPtr As IntPtr = New IntPtr(pBuffer.ToInt32() + (dwIndex * Marshal.SizeOf(pCurrent)))
pCurrent = CType(Marshal.PtrToStructure(iPtr, GetType(FILE_INFO_3)), FILE_INFO_3)
Dim fileId As Integer = pCurrent.fi3_id
NetApiBufferFree(pBuffer)
Return fileId
Next
End If
NetApiBufferFree(pBuffer)
Return -1
End Function
Public Function GetUsernameHandlingFile(ByVal fileId As Integer) As String
Dim defaultValue As String = "[Unknown User]"
If fileId = -1 Then
Return defaultValue
End If
Dim pBuffer_Info As IntPtr = IntPtr.Zero
Dim dwStatus_Info As Integer = NetFileGetInfo(Nothing, fileId, 3, pBuffer_Info)
If dwStatus_Info = 0 Then
Dim iPtr_Info As IntPtr = New IntPtr(pBuffer_Info.ToInt32())
Dim pCurrent_Info As FILE_INFO_3 = CType(Marshal.PtrToStructure(iPtr_Info, GetType(FILE_INFO_3)), FILE_INFO_3)
NetApiBufferFree(pBuffer_Info)
Return pCurrent_Info.fi3_username
End If
NetApiBufferFree(pBuffer_Info)
Return defaultValue
End Function
Public Function GetUsernameHandlingFile(ByVal filePath As String) As String
Dim fileId As Integer = GetFileIdFromPath(filePath)
' Always returns -1
Return GetUsernameHandlingFile(fileId)
End Function
End Module
|
[
"First, thanks to all that have posted messages here.\nThe good news:\nMy posted code works…\nThe bad news:\nIt’s a (heavy) problem with the rights.\nThe only way, I get it to work was, to run the app directly on the fileserver with admin rights (login on the server with Administrator).\nI had local admin rights on my client and my account was in the workstation admin group (of the domain), added groups \"domain admin\" and \"administrator\" to my account… don’t work.\nLogged in as (local) administrator… don’t work.\nTried also unc path (instead of the logical drive)… don’t work.\nSo… the purpose of the app was, to let the user:\n\nconfig some drives (and directories) to monitor with FileSystemWatcher()\nselect the datatypes to monitor (e.g. .docx, .xlsx)\nget and store all accesses to the files\nif the file is stored on a server - get the username of the user that has stored the file as last with NetFileEnum() - and store also the user name to the data\nlet the user set a filter to only his own files (for network drives)\n\nSo… unfortunately - for me (my app) is NetFileEnum() simply not usable.\n"
] |
[
0
] |
[] |
[] |
[
"vb.net",
"winapi"
] |
stackoverflow_0074602635_vb.net_winapi.txt
|
Q:
How can we integrate an external rest api call via slate in palantir foundry?
I wish to integrate an external rest api within a slate application?
Does Foundry allow calling external api's from SLATE, if yes how can we achieve the same?
A:
Slate is self-contained, so you won't be able to make external HTTP requests due to XSS protections. Allowing them could, to an extent, enable you to leak data outside of Foundry, so it's unlikely that you'll find a direct way of getting this to work.
Alternatively, is this external API call something you can pre-empt and cache? If so, you could use a magritte-rest-call to ingest data from your endpoint into a dataset at regular intervals, and query this dataset instead of the external API.
A:
In our Foundry instance, we can call external HTTP(s) destinations from Slate. Means technically it's possible. The configuration is done by the Palantir engineers.
If this integration makes sense or is recommended is a different discussion.
A:
Yes, Slate enables safely calling external REST APIs through HTTPJSON requests via the Queries tab.
In order for these queries to be made, the REST API needs to be configured as a Slate Datasource, which can currently only be configured by Palantir admins, so just reach out to your Palantir rep and they should be able to get you sorted.
The configuration of a Slate Datasource is necessary since Slate differentiates queries made between Edit mode and View mode such that viewers of a Slate app are prevented from seeing the exact requests. This prevents possible bad actors from gleaning information of external architecture and helps keep your sources safe.
|
How can we integrate an external rest api call via slate in palantir foundry?
|
I wish to integrate an external rest api within a slate application?
Does Foundry allow calling external api's from SLATE, if yes how can we achieve the same?
|
[
"Slate is self contained, so you won't be able to do external http requests due to XSS protections. This would to a limit enable you to leak data outside of Foundry, so it's unlikely that you'll find a direct way of getting this to work.\nAlternatively, is this external API call something you can pre-empt and cache? if yes then you could use a magrite-rest-call to ingest data from your endpoint to a dataset, at regular intervals, and query this dataset instead of the external API.\n",
"In our Foundry instance, we can call external HTTP(s) destinations from Slate. Means technically it's possible. The configuration is done by the Palantir engineers.\nIf this integration makes sense or is recommended is a different discussion.\n",
"Yes, Slate enables safely calling any external REST API's through HTTPJSON requests via the Queries tab.\nIn order for the these queries to be made, the REST API needs to be configured as a Slate Datasource which can currently only be configured by Palantir admins, so just reach out to your Palantir rep and they should be able to get you sorted.\nThe configuration of a Slate Datasource is necessary since Slate differentiates queries made between Edit mode and View mode such that viewers of a Slate app are prevented from seeing the exact requests. This prevents possible bad actors from gleaning information of external architecture and helps keep your sources safe.\n"
] |
[
1,
1,
1
] |
[] |
[] |
[
"api",
"external",
"palantir_foundry"
] |
stackoverflow_0064641139_api_external_palantir_foundry.txt
|
Q:
Setting corner radius on alertDialog using Builder. Android
I need to set rounded corners on an AlertDialog but specifically using a Builder.
Is it possible to set my own layout?
When using builder.setView(R.layout.my_style) I get a crash.
A:
You have passed a style resource where a layout is expected, so it crashed.
It should be:
AlertDialog.Builder builder = new AlertDialog.Builder(this, R.style.my_style);
LayoutInflater inflater = this.getLayoutInflater();
View dialogView = inflater.inflate(R.layout.xml_ui_file, null);//Add your XML File name
builder.setView(dialogView);
AlertDialog alertDialog = builder.create();
alertDialog.show();
|
Setting corner radius on alertDialog using Builder. Android
|
I need to set rounded corners on an AlertDialog but specifically using a Builder.
Is it possible to set my own layout?
When using builder.setView(R.layout.my_style) I get a crash.
|
[
"you have added Style in layout so its crashed.\nIt should be\nAlertDialog.Builder builder = new AlertDialog.Builder(this, R.style.my_style);\nLayoutInflater inflater = this.getLayoutInflater();\nView dialogView = inflater.inflate(R.layout.xml_ui_file, null);//Add your XML File name\ndialogBuilder.setView(dialogView);\n AlertDialog alertDialog = builder.create();\nalertDialog.show();\n\n"
] |
[
1
] |
[] |
[] |
[
"android",
"android_alertdialog",
"xml"
] |
stackoverflow_0074657895_android_android_alertdialog_xml.txt
|
Q:
Migrate single VM to azure
We are trying to migrate a VM in a private cloud to azure. This VM has multiple web applications and databases. We don't have access to the virtualization, just access to this single VM.
Can anyone suggest how we can do the migration to azure with just having access to the VM itself?
Regards
Anup
A:
As far as I know, and as per the documentation, we can't migrate a single VM without having access to the virtualization layer.
That's because while processing the migration you need to generate a project key, which requires access to the virtualization layer.
You can go through the Microsoft documentation for further details.
A:
I would like to migrate single VM to Azure Cloud using Azure Migration tool.
I know all the process how to do it. I just want to know about the data how it will replicate to azure and do we need to create a vm in azure before migration or azure will automatically deploy the VM.
|
Migrate single VM to azure
|
We are trying to migrate a VM in a private cloud to azure. This VM has multiple web applications and databases. We don't have access to the virtualization, just access to this single VM.
Can anyone suggest how we can do the migration to azure with just having access to the VM itself?
Regards
Anup
|
[
"\nAs far as I know and as per the document, we can't migrate Single VM without having access to Virtualization.\n\nBecause while processing Migration you need to generate Project Key where you need to have access Virtualization.\n\n\nYou can go through the Microsoft Document for further details.\n\n\n",
"I would like to migrate single VM to Azure Cloud using Azure Migration tool.\nI know all the process how to do it. I just want to know about the data how it will replicate to azure and do we need to create a vm in azure before migration or azure will automatically deploy the VM.\n"
] |
[
0,
0
] |
[] |
[] |
[
"azure",
"migration",
"server",
"virtual_machine"
] |
stackoverflow_0072526045_azure_migration_server_virtual_machine.txt
|
Q:
Jitsi Meet - event for user muted by other participant in the meeting
I have a requirement to show a custom notification if the user is muted by another participant in the meeting.
I have used the audiomutestatuschanged event. It is detected when the user mutes/unmutes himself, but it is not detected when the user is muted by someone else in the meeting.
Any other hacks to implement this?
Good day. Thanks in advance.
A:
In cases like this, when you need to do some custom stuff, you can use commands. You need to create a specific command for the mute/unmute case, and then you can pass the props you need, like info about who is trying to mute another participant. With the command listener for your command and the data you've received, you can do whatever you need, such as showing the custom notification. A rough sketch follows.
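For illustration only: a minimal sketch that assumes room is your JitsiConference instance; the command name 'custom-mute', the payload shape, and showMutedByNotification are my own choices, not part of lib-jitsi-meet, so adjust them to your setup.
// Side that performs the mute: announce who is muting whom
room.sendCommand('custom-mute', {
  value: mutedParticipantId,               // id of the participant being muted (assumed variable)
  attributes: { by: room.myUserId() },     // who triggered the mute
});

// Every participant listens for the command and reacts if it targets them
room.addCommandListener('custom-mute', (data) => {
  if (data.value === room.myUserId()) {
    showMutedByNotification(data.attributes.by);   // placeholder for your UI code
  }
});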
|
Jitsi Meet - event for user muted by other participant in the meeting
|
I have a requirement to show a custom notification if the user is muted by another participant in the meeting.
I have used the audiomutestatuschanged event. It is detected when the user mutes/unmutes himself, but it is not detected when the user is muted by someone else in the meeting.
Any other hacks to implement this?
Good day. Thanks in advance.
|
[
"in this cases when you need to do some custom stuff you can use the commands, of course you need to create a specific command for the muted/ unmuted and then just you can pass the props you need like info about who is trying to mute another participant. With the command listener of your command, with the data that you've received now you can use it to do whatever you need\n"
] |
[
0
] |
[] |
[] |
[
"jitsi",
"jitsi_meet",
"lib_jitsi_meet",
"vue.js"
] |
stackoverflow_0074371102_jitsi_jitsi_meet_lib_jitsi_meet_vue.js.txt
|
Q:
Two views within a view, top with button and the lower not showing the firebase data
Simple project to be able to understand the concept.
I have two views, UpperView and LowerView. The UpperView has a button; when clicked, the button calls a ViewModel that fetches data from Firebase. My problem is displaying the fetched data in the LowerView. I initialize the ViewModel in the LowerView so that I can access the fetched data through a @Published property, but it doesn't work. It's a pretty simple case that I have built in order to understand the concept. Here is the code for UpperView, LowerView and the ViewModel. HomeView is the combination of the UpperView and the LowerView. It feels as if the data is loaded after the LowerView is displayed. All help will be appreciated!!
import Foundation
class MergeViewModel: ObservableObject {
@Published var clients: [Client] = [Client]()
func fetchAllClients() {
COLLECTION_CLIENTS.getDocuments { querySnapshot, error in
if let error = error {
print(error.localizedDescription)
return
}
guard let documents = querySnapshot?.documents else { return }
self.clients = documents.compactMap({ try? $0.data(as: Client.self)})
print(self.clients.count)
}
}
}
import SwiftUI
struct UpperView: View {
@ObservedObject var viewModel = MergeViewModel()
@State var numberOfClients: Int = 0
@State var buttonPressed: Int = 0
@State var clients: [Client] = [Client]()
var body: some View {
ZStack {
Color(.red)
VStack{
Text("This is UPPER VIEW ")
.foregroundColor(.white)
Text("We have \(numberOfClients) of clients!")
Text("Button pressed \(buttonPressed)")
Button(action: {
viewModel.fetchAllClients()
numberOfClients = viewModel.clients.count
buttonPressed += 1
}, label: {
Text("Press")
.frame(width: 100, height: 50)
.background(Color.white.opacity(0.50))
.cornerRadius(10)
})
}
}.ignoresSafeArea()
}
}
struct UpperView_Previews: PreviewProvider {
static var previews: some View {
UpperView()
}
}
import SwiftUI
struct LowerView: View {
@ObservedObject var viewModel = MergeViewModel()
var body: some View {
VStack {
Text("This is LOWER VIEW")
.foregroundColor(.black)
Text("\(viewModel.clients.count)")
.foregroundColor(.black)
List(viewModel.clients) { client in
Text(client.clientName)
.foregroundColor(.black)
}
}
}
}
struct LowerView_Previews: PreviewProvider {
static var previews: some View {
LowerView()
}
}
import SwiftUI
struct HomeView: View {
var body: some View {
NavigationView {
VStack {
UpperView()
LowerView()
Spacer()
}
.navigationTitle("")
.navigationBarTitleDisplayMode(.inline)
.toolbar {
ToolbarItem(placement: .principal) {
HStack {
Image("logo_silueta")
.resizable()
.scaledToFit()
.frame(width: 30)
Text("TheJump")
.font(.subheadline)
.foregroundColor(.gray.opacity(0.8))
}
}
ToolbarItem(placement: .navigationBarTrailing) {
Button(action: {
AuthViewModel.shared.signOut()
}, label: {
Text("logout")
})
}
}
}
}
}
A:
Thanks for your input!
Here is how it works correctly:
ViewModel
import Foundation
class MergeViewModel: ObservableObject {
@Published var clients: [Client] = [Client]()
init(){
fetchAllClients()
}
func fetchAllClients() {
COLLECTION_CLIENTS.getDocuments { querySnapshot, error in
if let error = error {
print(error.localizedDescription)
return
}
guard let documents = querySnapshot?.documents else { return }
self.clients = documents.compactMap({ try? $0.data(as: Client.self)})
print(self.clients.count)
}
}
}
UpperView
import SwiftUI
struct UpperView: View {
@ObservedObject var viewModel: MergeViewModel
@State var numberOfClients: Int = 0
@State var buttonPressed: Int = 0
@State var clients: [Client] = [Client]()
var body: some View {
ZStack {
Color(.red)
VStack{
Text("This is UPPER VIEW ")
.foregroundColor(.white)
Text("We have \(numberOfClients) of clients!")
Text("Button pressed \(buttonPressed)")
Button(action: {
viewModel.fetchAllClients()
numberOfClients = viewModel.clients.count
buttonPressed += 1
}, label: {
Text("Press")
.frame(width: 100, height: 50)
.background(Color.white.opacity(0.50))
.cornerRadius(10)
})
}
}.ignoresSafeArea()
}
}
struct UpperView_Previews: PreviewProvider {
static var previews: some View {
UpperView(viewModel: MergeViewModel())
}
}
LowerView
struct LowerView: View {
@ObservedObject var viewModel: MergeViewModel
var body: some View {
VStack {
Text("This is LOWER VIEW")
.foregroundColor(.black)
Text("\(viewModel.clients.count)")
.foregroundColor(.black)
List(viewModel.clients) { client in
Text(client.clientName)
.foregroundColor(.black)
}
}
}
}
struct LowerView_Previews: PreviewProvider {
static var previews: some View {
LowerView(viewModel: MergeViewModel())
}
}
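For completeness, a sketch of how HomeView could own one shared view model and pass the same instance to both subviews; the @StateObject wrapper here is my assumption, any approach that shares a single MergeViewModel instance works:
import SwiftUI

struct HomeView: View {
    // One shared instance owned by the parent, so both views observe the same data.
    @StateObject private var viewModel = MergeViewModel()

    var body: some View {
        NavigationView {
            VStack {
                UpperView(viewModel: viewModel)
                LowerView(viewModel: viewModel)
                Spacer()
            }
        }
    }
}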
|
Two views within a view, top with button and the lower not showing the firebase data
|
Simple project to be able to understand the concept.
I have two views, UpperView and LowerView. The UpperView has a button; when clicked, the button calls a ViewModel that fetches data from Firebase. My problem is displaying the fetched data in the LowerView. I initialize the ViewModel in the LowerView so that I can access the fetched data through a @Published property, but it doesn't work. It's a pretty simple case that I have built in order to understand the concept. Here is the code for UpperView, LowerView and the ViewModel. HomeView is the combination of the UpperView and the LowerView. It feels as if the data is loaded after the LowerView is displayed. All help will be appreciated!!
import Foundation
class MergeViewModel: ObservableObject {
@Published var clients: [Client] = [Client]()
func fetchAllClients() {
COLLECTION_CLIENTS.getDocuments { querySnapshot, error in
if let error = error {
print(error.localizedDescription)
return
}
guard let documents = querySnapshot?.documents else { return }
self.clients = documents.compactMap({ try? $0.data(as: Client.self)})
print(self.clients.count)
}
}
}
import SwiftUI
struct UpperView: View {
@ObservedObject var viewModel = MergeViewModel()
@State var numberOfClients: Int = 0
@State var buttonPressed: Int = 0
@State var clients: [Client] = [Client]()
var body: some View {
ZStack {
Color(.red)
VStack{
Text("This is UPPER VIEW ")
.foregroundColor(.white)
Text("We have \(numberOfClients) of clients!")
Text("Button pressed \(buttonPressed)")
Button(action: {
viewModel.fetchAllClients()
numberOfClients = viewModel.clients.count
buttonPressed += 1
}, label: {
Text("Press")
.frame(width: 100, height: 50)
.background(Color.white.opacity(0.50))
.cornerRadius(10)
})
}
}.ignoresSafeArea()
}
}
struct UpperView_Previews: PreviewProvider {
static var previews: some View {
UpperView()
}
}
import SwiftUI
struct LowerView: View {
@ObservedObject var viewModel = MergeViewModel()
var body: some View {
VStack {
Text("This is LOWER VIEW")
.foregroundColor(.black)
Text("\(viewModel.clients.count)")
.foregroundColor(.black)
List(viewModel.clients) { client in
Text(client.clientName)
.foregroundColor(.black)
}
}
}
}
struct LowerView_Previews: PreviewProvider {
static var previews: some View {
LowerView()
}
}
import SwiftUI
struct HomeView: View {
var body: some View {
NavigationView {
VStack {
UpperView()
LowerView()
Spacer()
}
.navigationTitle("")
.navigationBarTitleDisplayMode(.inline)
.toolbar {
ToolbarItem(placement: .principal) {
HStack {
Image("logo_silueta")
.resizable()
.scaledToFit()
.frame(width: 30)
Text("TheJump")
.font(.subheadline)
.foregroundColor(.gray.opacity(0.8))
}
}
ToolbarItem(placement: .navigationBarTrailing) {
Button(action: {
AuthViewModel.shared.signOut()
}, label: {
Text("logout")
})
}
}
}
}
}
|
[
"Thanks for your input!\nHere is how it works correctly:\nViewModel\nimport Foundation\n\n\nclass MergeViewModel: ObservableObject {\n @Published var clients: [Client] = [Client]()\n \n init(){\n fetchAllClients()\n }\n \n func fetchAllClients() {\n COLLECTION_CLIENTS.getDocuments { querySnapshot, error in\n if let error = error {\n print(error.localizedDescription)\n return\n }\n guard let documents = querySnapshot?.documents else { return }\n self.clients = documents.compactMap({ try? $0.data(as: Client.self)})\n print(self.clients.count)\n }\n }\n}\n\nUpperView\nimport SwiftUI\n\nstruct UpperView: View {\n @ObservedObject var viewModel: MergeViewModel\n @State var numberOfClients: Int = 0\n @State var buttonPressed: Int = 0\n @State var clients: [Client] = [Client]()\n \n var body: some View {\n ZStack {\n Color(.red)\n VStack{\n \n Text(\"This is UPPER VIEW \")\n .foregroundColor(.white)\n Text(\"We have \\(numberOfClients) of clients!\")\n Text(\"Button pressed \\(buttonPressed)\")\n \n Button(action: {\n viewModel.fetchAllClients()\n numberOfClients = viewModel.clients.count\n buttonPressed += 1\n }, label: {\n Text(\"Press\")\n .frame(width: 100, height: 50)\n .background(Color.white.opacity(0.50))\n .cornerRadius(10)\n })\n }\n \n }.ignoresSafeArea()\n }\n}\n\nstruct UpperView_Previews: PreviewProvider {\n static var previews: some View {\n UpperView(viewModel: MergeViewModel())\n }\n}\n\nLowerView\nstruct LowerView: View {\n @ObservedObject var viewModel: MergeViewModel\n \n var body: some View {\n \n VStack {\n Text(\"This is LOWER VIEW\")\n .foregroundColor(.black)\n Text(\"\\(viewModel.clients.count)\")\n .foregroundColor(.black)\n List(viewModel.clients) { client in\n Text(client.clientName)\n .foregroundColor(.black)\n }\n }\n }\n}\n\nstruct LowerView_Previews: PreviewProvider {\n static var previews: some View {\n LowerView(viewModel: MergeViewModel())\n }\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"swiftui"
] |
stackoverflow_0074654912_swiftui.txt
|
Q:
Changing the content of c source code isn't changing the output
I had this code previously
#include <stdio.h>
int main(void)
{
printf("hello");
}
And it printed hello.
Now I changed the printf content to WHAT
#include <stdio.h>
int main(void)
{
printf("WHAT");
}
But it's still printing hello.
Here is my c_cpp_properties.json, which I modified because gcc was not working even when the path was correctly added to the environment variable.
{
"configurations": [
{
"name": "Win32",
"includePath": [
"C:\\MinGW\\include"
],
"defines": [
"_DEBUG",
"UNICODE",
"_UNICODE"
],
"cStandard": "c17",
"cppStandard": "c++17",
"intelliSenseMode": "windows-gcc-x64",
"compilerPath": "C:/MinGW/bin/gcc.exe"
}
],
"version": 4
}
I tried making testing.c blank by deleting all the code, but when I execute it by running
.\testing
it still prints hello when it's not supposed to.
All of this happens without any errors being reported.
I am using C/C++ extension v1.12.4
A:
C is a compiled language. That means that you use the source code (your code in testing.c file) to create an executable file, probably an .exe file in Windows. This is different from programs written in interpreted languages like python, which are run by another program like py.exe (This statement is only partly true because executables from compilation are usually started by a loader program in the operating system, but that is not important in this context.).
You should be able to compile your program like this:
gcc -o testing.exe testing.c
The -o testing.exe allows you to specify the name of the executable file you want to create, testing.exe in the example I gave. If you do not include -o testing.exe, then your program will probably be written to a new file named a.out (or maybe a.exe because you are on Windows and using MinGW).
You can then execute the file generated by gcc in your terminal like so:
./testing.exe
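If you want VS Code to run the compile step for you, here is a minimal tasks.json sketch; the file goes in .vscode/tasks.json, and the label and output name are assumptions for this example:
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "build testing",
      "type": "shell",
      "command": "gcc",
      "args": ["-o", "testing.exe", "testing.c"],
      "group": { "kind": "build", "isDefault": true }
    }
  ]
}
Run the build task (Ctrl+Shift+B) after every change, then execute ./testing.exe again; the output will reflect your latest source.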
|
Changing the content of c source code isn't changing the output
|
I had this code previously
#include <stdio.h>
int main(void)
{
printf("hello");
}
And it printed hello.
Now I changed the printf content to WHAT
#include <stdio.h>
int main(void)
{
printf("WHAT");
}
But it's still printing hello.
Here is my c_cpp_properties.json, which I modified because gcc was not working even when the path was correctly added to the environment variable.
{
"configurations": [
{
"name": "Win32",
"includePath": [
"C:\\MinGW\\include"
],
"defines": [
"_DEBUG",
"UNICODE",
"_UNICODE"
],
"cStandard": "c17",
"cppStandard": "c++17",
"intelliSenseMode": "windows-gcc-x64",
"compilerPath": "C:/MinGW/bin/gcc.exe"
}
],
"version": 4
}
I tried making testing.c blank by deleting all the code, but when I execute it by running
.\testing
it still prints hello when it's not supposed to.
All of this happens without any errors being reported.
I am using C/C++ extension v1.12.4
|
[
"C is a compiled language. That means that you use the source code (your code in testing.c file) to create an executable file, probably an .exe file in Windows. This is different from programs written in interpreted languages like python, which are run by another program like py.exe (This statement is only partly true because executables from compilation are usually started by a loader program in the operating system, but that is not important in this context.).\nYou should be able to compile your program like this:\ngcc -o testing.exe testing.c\nThe -o testing.exe allows you to specify the name of the executable file you want to create, testing.exe in the example I gave. If you do not include -o testing.exe, then your program will probably be written to a new file named a.out (or maybe a.exe because you are on Windows and using MinGW).\nYou can then execute the file generated by gcc in your terminal like so:\n./testing.exe\n"
] |
[
0
] |
[] |
[] |
[
"c",
"gcc",
"mingw"
] |
stackoverflow_0074658477_c_gcc_mingw.txt
|
Q:
Spring Boot 2.7.5 + Angular 15 as a single war
I'm working on a fullstack app with Spring Boot v2.7.5 as the backend and Angular v15 as the frontend. I use the IntelliJ IDEA IDE for development. Locally, Spring Boot runs on http://localhost:8080 and Angular runs on http://localhost:4200. I use Gradle to build the project into a single war file, which is then deployed on an external Tomcat server.
Following is the project structure:
I have 3 build.gradle files: 1 for the frontend, 1 for the backend, and 1 global. When I run the global build.gradle file, it calls build.gradle from the frontend folder, which builds the Angular project, copies all the build files, and puts them into the backend/src/main/resources/static folder. Next, build.gradle from the backend gets called, which builds the final war file to be deployed on the external Tomcat server.
The reason I'm putting the frontend build files (index.html, some .js files) into backend/src/main/resources/static is the fact that Spring Boot serves static content from that location (more details).
So the static directory looks like this after adding frontend build files:
When I try to access http://localhost:8080, it loads index.html from static folder.
So far it is good. When I click the login button, it internally calls the backend API and moves to the next page (the home page, i.e., http://localhost:8080/fe/appInstances).
Now if I refresh the page, it gives me the following 404 Whitelabel Error Page.
I understand that this happens because Spring Boot is looking for a definition of the http://localhost:8080/fe/appInstances API endpoint in the Java code.
To fix this, I have created the following IndexController.java class, which should redirect all the frontend REST endpoints to index.html, which is present in the main/resources/static folder.
IndexController.java
@Controller
public class IndexController {
@GetMapping("/")
public String index() {
return "redirect:/index";
}
@GetMapping("/fe/*")
public String anyFrontEndApi() {
return "index";
}
}
But now, I get the following Whitelabel error page about Circular view path [index]: would dispatch back to the current handler URL [/fe/index] again.
I have tried changing @Controller to @RestController and changing the return type to ModelAndView, something like this. But irrespective of all that, it is still giving me the Whitelabel Error Page about the circular view path...
@RestController
public class IndexController {
@GetMapping("/")
public String index() {
return "redirect:/index";
}
@GetMapping("/fe/*")
public ModelAndView anyFrontEndApi() {
ModelAndView mv = new ModelAndView();
mv.setViewName("index");
return mv;
}
}
Am I missing something here? Can someone please suggest a fix for this?
PS: @justthink addressed this situation here, but I don't know how to do it the reverse proxy way.
A:
We had this page-refresh situation with Angular and Spring Boot, and we resolved it by adding the Configuration class below, extending WebMvcConfigurerAdapter.
@Configuration
public class WebMvcConfig extends WebMvcConfigurerAdapter {
@Override
public void addResourceHandlers(ResourceHandlerRegistry registry) {
registry.addResourceHandler("/**/*")
.addResourceLocations("classpath:/static/")
.resourceChain(true)
.addResolver(new PathResourceResolver() {
@Override
protected Resource getResource(String resourcePath, Resource location) throws IOException {
Resource requestedResource = location.createRelative(resourcePath);
return requestedResource.exists() && requestedResource.isReadable() ? requestedResource
: new ClassPathResource("/static/index.html");
}
});
}
}
So basically, we are telling Spring Boot to serve the resource if it exists, and otherwise to fall back to index.html.
Now, how the path is handled in Angular depends on how you have written your routes. If the path is available, you show the page; if not, you display a 404 page.
Hope this helps.
Update 1:
WebMvcConfigurerAdapter is deprecated. If this causes any trouble, then instead of extending the class WebMvcConfigurerAdapter, you can implement WebMvcConfigurer
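For reference, a sketch of the same handler implementing WebMvcConfigurer instead; the resolver logic is unchanged, only the class declaration differs:
@Configuration
public class WebMvcConfig implements WebMvcConfigurer {
    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("/**/*")
                .addResourceLocations("classpath:/static/")
                .resourceChain(true)
                .addResolver(new PathResourceResolver() {
                    @Override
                    protected Resource getResource(String resourcePath, Resource location) throws IOException {
                        // Serve the requested file if it exists; otherwise fall back to the SPA entry point.
                        Resource requestedResource = location.createRelative(resourcePath);
                        return requestedResource.exists() && requestedResource.isReadable() ? requestedResource
                                : new ClassPathResource("/static/index.html");
                    }
                });
    }
}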
A:
If you look at the whitelabel error, it says that "this application has no explicit mapping for /error".
That means that if no path matches the paths defined in the controller mappings, the request is forwarded to the "/error" route. So we can override this default behaviour.
Spring provides ErrorController interface to override this functionality
import org.springframework.boot.web.servlet.error.ErrorController;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
@Controller
public class CustomErrorController implements ErrorController {
@RequestMapping("/error")
public String handleError() {
return "forward:/";
}
}
|
Spring Boot 2.7.5 + Angular 15 as a single war
|
I'm working on a fullstack app with Spring Boot v2.7.5 as the backend and Angular v15 as the frontend. I use the IntelliJ IDEA IDE for development. Locally, Spring Boot runs on http://localhost:8080 and Angular runs on http://localhost:4200. I use Gradle to build the project into a single war file, which is then deployed on an external Tomcat server.
Following is the project structure:
I have 3 build.gradle files: 1 for the frontend, 1 for the backend, and 1 global. When I run the global build.gradle file, it calls build.gradle from the frontend folder, which builds the Angular project, copies all the build files, and puts them into the backend/src/main/resources/static folder. Next, build.gradle from the backend gets called, which builds the final war file to be deployed on the external Tomcat server.
The reason I'm putting the frontend build files (index.html, some .js files) into backend/src/main/resources/static is the fact that Spring Boot serves static content from that location (more details).
So the static directory looks like this after adding frontend build files:
When I try to access http://localhost:8080, it loads index.html from static folder.
So far it is good. When I click the login button, it internally calls the backend API and moves to the next page (the home page, i.e., http://localhost:8080/fe/appInstances).
Now if I refresh the page, it gives me the following 404 Whitelabel Error Page.
I understand that this happens because Spring Boot is looking for a definition of the http://localhost:8080/fe/appInstances API endpoint in the Java code.
To fix this, I have created the following IndexController.java class, which should redirect all the frontend REST endpoints to index.html, which is present in the main/resources/static folder.
IndexController.java
@Controller
public class IndexController {
@GetMapping("/")
public String index() {
return "redirect:/index";
}
@GetMapping("/fe/*")
public String anyFrontEndApi() {
return "index";
}
}
But now, I get the following Whitelabel error page about Circular view path [index]: would dispatch back to the current handler URL [/fe/index] again.
I have tried changing @Controller to @RestController and changing the return type to ModelAndView, something like this. But irrespective of all that, it is still giving me the Whitelabel Error Page about the circular view path...
@RestController
public class IndexController {
@GetMapping("/")
public String index() {
return "redirect:/index";
}
@GetMapping("/fe/*")
public ModelAndView anyFrontEndApi() {
ModelAndView mv = new ModelAndView();
mv.setViewName("index");
return mv;
}
}
Am I missing something here? Can someone please suggest a fix for this?
PS: @justthink addressed this situation here, but I don't know how to do it the reverse proxy way.
|
[
"We had this situation of page refresh for Angular and Springboot and we resolved this by adding the below Configuration class extending WebMvcConfigurerAdapter\n@Configuration\npublic class WebMvcConfig extends WebMvcConfigurerAdapter {\n @Override\n public void addResourceHandlers(ResourceHandlerRegistry registry) {\n registry.addResourceHandler(\"/**/*\")\n .addResourceLocations(\"classpath:/static/\")\n .resourceChain(true)\n .addResolver(new PathResourceResolver() {\n @Override\n protected Resource getResource(String resourcePath, Resource location) throws IOException {\n Resource requestedResource = location.createRelative(resourcePath);\n return requestedResource.exists() && requestedResource.isReadable() ? requestedResource\n : new ClassPathResource(\"/static/index.html\");\n }\n });\n }\n}\n\nSo basically, we are telling Springboot that if we have the resource, use the same if not then redirect it to index.html.\nNow, to handle the path in Angular, it depends on how you would have written your routes. If the path is available, you show the page, if not, display 404 page.\nHope this helps.\nUpdate 1:\nWebMvcConfigurerAdapter is deprecated. If this causes any trouble, then instead of extending the class WebMvcConfigurerAdapter, you can implement WebMvcConfigurer\n",
"If you see the whitelabel error says that \"this application has no explicit mapping for /error\".\nThat means if no path is matched with the paths that are defined in controller mappings, it forwards the request to \"/error\" route. So we can override this default behaviour.\nSpring provides ErrorController interface to override this functionality\nimport org.springframework.boot.web.servlet.error.ErrorController;\nimport org.springframework.stereotype.Controller;\nimport org.springframework.web.bind.annotation.RequestMapping;\n\n@Controller\npublic class CustomErrorController implements ErrorController {\n\n @RequestMapping(\"/error\")\n public String handleError() {\n return \"forward:/\";\n }\n}\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"angular",
"gradle",
"java",
"spring",
"spring_boot"
] |
stackoverflow_0074569409_angular_gradle_java_spring_spring_boot.txt
|
Q:
Expansion of variables inside single quotes in a command in Bash
I want to run a command from a bash script which has single quotes and some other commands inside the single quotes and a variable.
e.g. repo forall -c '....$variable'
In this format, $ is escaped and the variable is not expanded.
I tried the following variations but they were rejected:
repo forall -c '...."$variable" '
repo forall -c " '....$variable' "
" repo forall -c '....$variable' "
repo forall -c "'" ....$variable "'"
If I substitute the value in place of the variable the command is executed just fine.
Please tell me where I am going wrong.
A:
Inside single quotes everything is preserved literally, without exception.
That means you have to close the quotes, insert something, and then re-enter again.
'before'"$variable"'after'
'before'"'"'after'
'before'\''after'
Word concatenation is simply done by juxtaposition. As you can verify, each of the above lines is a single word to the shell. Quotes (single or double quotes, depending on the situation) don't isolate words. They are only used to disable interpretation of various special characters, like whitespace, $, ;... For a good tutorial on quoting see Mark Reed's answer. Also relevant: Which characters need to be escaped in bash?
Do not concatenate strings interpreted by a shell
You should absolutely avoid building shell commands by concatenating variables. This is a bad idea similar to concatenation of SQL fragments (SQL injection!).
Usually it is possible to have placeholders in the command, and to supply the command together with variables so that the callee can receive them from the invocation arguments list.
For example, the following is very unsafe. DON'T DO THIS
script="echo \"Argument 1 is: $myvar\""
/bin/sh -c "$script"
If the contents of $myvar is untrusted, here is an exploit:
myvar='foo"; echo "you were hacked'
Instead of the above invocation, use positional arguments. The following invocation is better -- it's not exploitable:
script='echo "arg 1 is: $1"'
/bin/sh -c "$script" -- "$myvar"
Note the use of single ticks in the assignment to script, which means that it's taken literally, without variable expansion or any other form of interpretation.
A:
The repo command can't care what kind of quotes it gets. If you need parameter expansion, use double quotes. If that means you wind up having to backslash a lot of stuff, use single quotes for most of it, and then break out of them and go into doubles for the part where you need the expansion to happen.
repo forall -c 'literal stuff goes here; '"stuff with $parameters here"' more literal stuff'
Explanation follows, if you're interested.
When you run a command from the shell, what that command receives as arguments is an array of null-terminated strings. Those strings may contain absolutely any non-null character.
But when the shell is building that array of strings from a command line, it interprets some characters specially; this is designed to make commands easier (indeed, possible) to type. For instance, spaces normally indicate the boundary between strings in the array; for that reason, the individual arguments are sometimes called "words". But an argument may nonetheless have spaces in it; you just need some way to tell the shell that's what you want.
You can use a backslash in front of any character (including space, or another backslash) to tell the shell to treat that character literally. But while you can do something like this:
reply=\"That\'ll\ be\ \$4.96,\ please,\"\ said\ the\ cashier
...it can get tiresome. So the shell offers an alternative: quotation marks. These come in two main varieties.
Double-quotation marks are called "grouping quotes". They prevent wildcards and aliases from being expanded, but mostly they're for including spaces in a word. Other things like parameter and command expansion (the sorts of thing signaled by a $) still happen. And of course if you want a literal double-quote inside double-quotes, you have to backslash it:
reply="\"That'll be \$4.96, please,\" said the cashier"
Single-quotation marks are more draconian. Everything between them is taken completely literally, including backslashes. There is absolutely no way to get a literal single quote inside single quotes.
Fortunately, quotation marks in the shell are not word delimiters; by themselves, they don't terminate a word. You can go in and out of quotes, including between different types of quotes, within the same word to get the desired result:
reply='"That'\''ll be $4.96, please," said the cashier'
So that's easier - a lot fewer backslashes, although the close-single-quote, backslashed-literal-single-quote, open-single-quote sequence takes some getting used to.
Modern shells have added another quoting style not specified by the POSIX standard, in which the leading single quotation mark is prefixed with a dollar sign. Strings so quoted follow similar conventions to string literals in the ANSI standard version of the C programming language, and are therefore sometimes called "ANSI strings" and the $'...' pair "ANSI quotes". Within such strings, the above advice about backslashes being taken literally no longer applies. Instead, they become special again - not only can you include a literal single quotation mark or backslash by prepending a backslash to it, but the shell also expands the ANSI C character escapes (like \n for a newline, \t for tab, and \xHH for the character with hexadecimal code HH). Otherwise, however, they behave as single-quoted strings: no parameter or command substitution takes place:
reply=$'"That\'ll be $4.96, please," said the cashier'
The important thing to note is that the single string that gets stored in the reply variable is exactly the same in all of these examples. Similarly, after the shell is done parsing a command line, there is no way for the command being run to tell exactly how each argument string was actually typed – or even if it was typed, rather than being created programmatically somehow.
A:
Below is what worked for me -
QUOTE="'"
hive -e "alter table TBL_NAME set location $QUOTE$TBL_HDFS_DIR_PATH$QUOTE"
A:
EDIT: (As per the comments in question:)
I've been looking into this since then. I was lucky enough that I had repo lying around. Still, it's not clear to me whether you need to enclose your commands in single quotes by force. I looked into the repo syntax and I don't think you need to. You could use double quotes around your command, and then use whatever single and double quotes you need inside, provided you escape the double ones.
A:
just use printf
instead of
repo forall -c '....$variable'
use printf to replace the variable token with the expanded variable.
For example:
template='.... %s'
repo forall -c $(printf "${template}" "${variable}")
A:
Variables can contain single quotes.
myvar=\'....$variable\'
repo forall -c $myvar
A:
I was wondering why I could never get my awk statement to print from an ssh session, so I found this forum. Nothing here helped me directly, but if anyone is having an issue similar to the one below, then give me an upvote. It seemed that no combination of single or double quotes was helping, but then I didn't try everything.
check_var="df -h / | awk 'FNR==2{print $3}'"
getckvar=$(ssh user@host "$check_var")
echo $getckvar
What do you get? A load of nothing.
Fix: escape the $3 as \$3 in your print function.
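For example, a corrected version of the snippet above; escaping $3 stops the local shell from expanding it before ssh sends the command, so awk on the remote side still sees $3:
# The \$3 survives the local double quotes; the remote shell's single quotes pass it to awk intact.
check_var="df -h / | awk 'FNR==2{print \$3}'"
getckvar=$(ssh user@host "$check_var")
echo "$getckvar"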
|
Expansion of variables inside single quotes in a command in Bash
|
I want to run a command from a bash script which has single quotes and some other commands inside the single quotes and a variable.
e.g. repo forall -c '....$variable'
In this format, $ is escaped and the variable is not expanded.
I tried the following variations but they were rejected:
repo forall -c '...."$variable" '
repo forall -c " '....$variable' "
" repo forall -c '....$variable' "
repo forall -c "'" ....$variable "'"
If I substitute the value in place of the variable the command is executed just fine.
Please tell me where I am going wrong.
|
[
"Inside single quotes everything is preserved literally, without exception.\nThat means you have to close the quotes, insert something, and then re-enter again.\n'before'\"$variable\"'after'\n'before'\"'\"'after'\n'before'\\''after'\n\nWord concatenation is simply done by juxtaposition. As you can verify, each of the above lines is a single word to the shell. Quotes (single or double quotes, depending on the situation) don't isolate words. They are only used to disable interpretation of various special characters, like whitespace, $, ;... For a good tutorial on quoting see Mark Reed's answer. Also relevant: Which characters need to be escaped in bash?\nDo not concatenate strings interpreted by a shell\nYou should absolutely avoid building shell commands by concatenating variables. This is a bad idea similar to concatenation of SQL fragments (SQL injection!).\nUsually it is possible to have placeholders in the command, and to supply the command together with variables so that the callee can receive them from the invocation arguments list.\nFor example, the following is very unsafe. DON'T DO THIS\nscript=\"echo \\\"Argument 1 is: $myvar\\\"\"\n/bin/sh -c \"$script\"\n\nIf the contents of $myvar is untrusted, here is an exploit:\nmyvar='foo\"; echo \"you were hacked'\n\nInstead of the above invocation, use positional arguments. The following invocation is better -- it's not exploitable:\nscript='echo \"arg 1 is: $1\"'\n/bin/sh -c \"$script\" -- \"$myvar\"\n\nNote the use of single ticks in the assignment to script, which means that it's taken literally, without variable expansion or any other form of interpretation.\n",
"The repo command can't care what kind of quotes it gets. If you need parameter expansion, use double quotes. If that means you wind up having to backslash a lot of stuff, use single quotes for most of it, and then break out of them and go into doubles for the part where you need the expansion to happen.\nrepo forall -c 'literal stuff goes here; '\"stuff with $parameters here\"' more literal stuff'\n\nExplanation follows, if you're interested.\nWhen you run a command from the shell, what that command receives as arguments is an array of null-terminated strings. Those strings may contain absolutely any non-null character.\nBut when the shell is building that array of strings from a command line, it interprets some characters specially; this is designed to make commands easier (indeed, possible) to type. For instance, spaces normally indicate the boundary between strings in the array; for that reason, the individual arguments are sometimes called \"words\". But an argument may nonetheless have spaces in it; you just need some way to tell the shell that's what you want.\nYou can use a backslash in front of any character (including space, or another backslash) to tell the shell to treat that character literally. But while you can do something like this:\nreply=\\”That\\'ll\\ be\\ \\$4.96,\\ please,\\\"\\ said\\ the\\ cashier\n\n...it can get tiresome. So the shell offers an alternative: quotation marks. These come in two main varieties.\nDouble-quotation marks are called \"grouping quotes\". They prevent wildcards and aliases from being expanded, but mostly they're for including spaces in a word. Other things like parameter and command expansion (the sorts of thing signaled by a $) still happen. And of course if you want a literal double-quote inside double-quotes, you have to backslash it:\nreply=\"\\\"That'll be \\$4.96, please,\\\" said the cashier\"\n\nSingle-quotation marks are more draconian. Everything between them is taken completely literally, including backslashes. There is absolutely no way to get a literal single quote inside single quotes.\nFortunately, quotation marks in the shell are not word delimiters; by themselves, they don't terminate a word. You can go in and out of quotes, including between different types of quotes, within the same word to get the desired result:\nreply='\"That'\\''ll be $4.96, please,\" said the cashier'\n\nSo that's easier - a lot fewer backslashes, although the close-single-quote, backslashed-literal-single-quote, open-single-quote sequence takes some getting used to.\nModern shells have added another quoting style not specified by the POSIX standard, in which the leading single quotation mark is prefixed with a dollar sign. Strings so quoted follow similar conventions to string literals in the ANSI standard version of the C programming language, and are therefore sometimes called \"ANSI strings\" and the $'...' pair \"ANSI quotes\". Within such strings, the above advice about backslashes being taken literally no longer applies. Instead, they become special again - not only can you include a literal single quotation mark or backslash by prepending a backslash to it, but the shell also expands the ANSI C character escapes (like \\n for a newline, \\t for tab, and \\xHH for the character with hexadecimal code HH). 
Otherwise, however, they behave as single-quoted strings: no parameter or command substitution takes place:\nreply=$'\"That\\'ll be $4.96, please,\" said the cashier'\n\nThe important thing to note is that the single string that gets stored in the reply variable is exactly the same in all of these examples. Similarly, after the shell is done parsing a command line, there is no way for the command being run to tell exactly how each argument string was actually typed – or even if it was typed, rather than being created programmatically somehow.\n",
"Below is what worked for me -\nQUOTE=\"'\"\nhive -e \"alter table TBL_NAME set location $QUOTE$TBL_HDFS_DIR_PATH$QUOTE\"\n\n",
"EDIT: (As per the comments in question:)\nI've been looking into this since then. I was lucky enough that I had repo laying around. Still it's not clear to me whether you need to enclose your commands between single quotes by force. I looked into the repo syntax and I don't think you need to. You could used double quotes around your command, and then use whatever single and double quotes you need inside provided you escape double ones.\n",
"just use printf\ninstead of \nrepo forall -c '....$variable'\n\nuse printf to replace the variable token with the expanded variable.\nFor example:\ntemplate='.... %s'\n\nrepo forall -c $(printf \"${template}\" \"${variable}\")\n\n",
"Variables can contain single quotes.\nmyvar=\\'....$variable\\'\n\nrepo forall -c $myvar\n\n",
"I was wondering why I could never get my awk statement to print from an ssh session so I found this forum. Nothing here helped me directly but if anyone is having an issue similar to below, then give me an up vote. It seems any sort of single or double quotes were just not helping, but then I didn't try everything.\ncheck_var=\"df -h / | awk 'FNR==2{print $3}'\"\ngetckvar=$(ssh user@host \"$check_var\")\necho $getckvar\nWhat do you get? A load of nothing.\nFix: escape \\$3, in your print function.\n"
] |
[
925,
129,
4,
2,
1,
0,
0
] |
[
"Does this work for you?\neval repo forall -c '....$variable'\n\n"
] |
[
-4
] |
[
"bash",
"quotes",
"shell",
"variables"
] |
stackoverflow_0013799789_bash_quotes_shell_variables.txt
|
Q:
Queries and Mutations not getting updated at /graphql
I have created multiple queries and mutations using Apollo Server, but they are not getting updated at http://localhost:8081/graphql.
Can anyone help me with this?
A:
Hi and welcome to Stack Overflow. One of the things we will need you to do is provide more specific information about your setup. Apollo Server v4 is the latest version and will set up your server at port 4000 by default. Try navigating to localhost:4000/graphql instead. If that doesn't work, you can find the "Getting Started" documentation here.
If you have other specific information about your setup that differs from the documentation on that page, please update the question and I can circle back on this.
Hope that helps and happy coding :)
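For reference, a minimal Apollo Server 4 setup sketch in an ES module; the schema and resolvers here are placeholders, and startStandaloneServer listens on port 4000 unless you pass a different port:
import { ApolloServer } from '@apollo/server';
import { startStandaloneServer } from '@apollo/server/standalone';

// Placeholder schema and resolvers for this sketch.
const typeDefs = `#graphql
  type Query {
    hello: String
  }
`;
const resolvers = {
  Query: {
    hello: () => 'world',
  },
};

const server = new ApolloServer({ typeDefs, resolvers });

// listen.port is optional; 4000 is the default.
const { url } = await startStandaloneServer(server, { listen: { port: 4000 } });
console.log(`Server ready at ${url}`);
Restart the server after changing your queries or mutations so the new schema is served at /graphql.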
|
Queries and Mutations not getting updated at /graphql
|
I have created multiple queries and mutations using Apollo Server, but they are not getting updated at http://localhost:8081/graphql.
Can anyone help me with this?
|
[
"Hi and welcome to Stack Overflow. One of the things we will need you to do is provide more specific information about your set up. Apollo Server v4 is the latest version and will set up your server at port 4000 by default. Try navigating to localhost:4000/graphql instead. If that doesn't work, you can find the \"Getting Started\" documentation here.\nIf you have other specific information about your setup that differs from the documentation on that page, please update the question and I can circle back on this.\nHope that helps and happy coding :)\n"
] |
[
0
] |
[] |
[] |
[
"apollo_server",
"express",
"graphql",
"node.js"
] |
stackoverflow_0074637450_apollo_server_express_graphql_node.js.txt
|
Q:
Show Sidebar Navigation only at Home using React-Router-Dom V6
I have this solution, but how do I achieve this in v6?
export default function App() {
return (
<div className="app">
<Router>
<Switch>
<Route exact path="/home">
{' '}
{/* Here */}
<SideBar />
<Home />
</Route>
<Route exact path="/search">
<Search />
</Route>
<Route exact path="/foo">
<Foo />
</Route>
</Switch>
</Router>
</div>
);
}
As Switch is gone, Routes does not allow wrapping the elements directly.
A:
With the new v6 of react-router, you can use the Routes component instead of Switch. See the official documentation.
To have content inside a Route, you need to pass the content to the element prop.
Also, exact is not supported anymore since there is no need for it now; the router matches the path directly:
export default function App() {
return (
<div className="app">
<Router>
<Routes>
<Route
path="/home"
element={
<>
{/* Here */}
<SideBar />
<Home />
</>
}
></Route>
<Route path="/search" element={<Search />}></Route>
<Route path="/foo" element={<Foo />}></Route>
</Routes>
</Router>
</div>
);
}
A:
A little late, but the option with "direct" injection of the sidebar may not suit you. A better practice, which avoids excessive re-renders of SideBar, is a layout route (see the sketch after this snippet):
<Routes>
<Route element={<SideBar />}>
<Route
path="/home"
element={<Home />}
/>
{/* ... some other routes with sidebar here ... */}
</Route>
<Route path="/search" element={<Search />}></Route>
<Route path="/foo" element={<Foo />}></Route>
</Routes>
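A sketch of the assumed SideBar layout component: when SideBar is used as a layout route like this, it must render an <Outlet /> where the matched child route should appear.
import { Outlet } from 'react-router-dom';

function SideBar() {
  return (
    <div className="layout">
      <nav>{/* sidebar navigation links */}</nav>
      {/* The matched child route (e.g. <Home />) renders here */}
      <Outlet />
    </div>
  );
}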
|
Show Sidebar Navigation only at Home using React-Router-Dom V6
|
I have this solution, but how do I achieve this in v6?
export default function App() {
return (
<div className="app">
<Router>
<Switch>
<Route exact path="/home">
{' '}
{/* Here */}
<SideBar />
<Home />
</Route>
<Route exact path="/search">
<Search />
</Route>
<Route exact path="/foo">
<Foo />
</Route>
</Switch>
</Router>
</div>
);
}
As Switch is gone, Routes does not allow wrapping the elements directly.
|
[
"With the new v6 of react-router, you can use Routes component instead of Switch. See the official documentation\nTho have content inside Route you need to pass content to element prop.\nAlso exact is not supported anymore since there is no need for it now, the router matches that path directly:\nexport default function App() {\n return (\n <div className=\"app\">\n <Router>\n <Routes>\n <Route\n path=\"/home\"\n element={\n <>\n {/* Here */}\n <SideBar />\n <Home />\n </>\n }\n ></Route>\n <Route path=\"/search\" element={<Search />}></Route>\n <Route path=\"/foo\" element={<Foo />}></Route>\n </Routes>\n </Router>\n </div>\n );\n}\n\n",
"A little late. But. Maybe the option with \"direct\" injection of the sidebar will not suit you. Best practice, without SideBar excessive re-render/lifecycle:\n <Routes>\n <Route element={<SideBar />}\n <Route\n path=\"/home\"\n element={<Home />}\n />\n /** ... some other routes with sidebar here ... */\n </Route>\n\n <Route path=\"/search\" element={<Search />}></Route>\n <Route path=\"/foo\" element={<Foo />}></Route>\n </Routes>\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"react_router",
"reactjs"
] |
stackoverflow_0070685330_react_router_reactjs.txt
|
Q:
Docker cannot push image to private registry | received unexpected HTTP status: 200 OK
I have a GitHub runner and a Docker registry on my server, and I'm trying to set up CI/CD for some applications.
Pushing directly to the container name with the port works smoothly:
docker pull alpine
docker tag alpine:latest docker-registry:5000/alpine:latest
docker login docker-registry:5000
Login Succeeded
docker push docker-registry:5000/alpine:latest
The push refers to repository [docker-registry:5000/alpine]
ded7a220bb05: Pushed
But I get a problem when I use the domain:
docker pull alpine
docker tag alpine:latest registry.mydomain.com/alpine:latest
docker login registry.mydomain.com
Login Succeeded
docker push registry.mydomain.com/alpine:latest
The push refers to repository [registry.mydomain.com/alpine]
ded7a220bb05: Retrying in 1 second
received unexpected HTTP status: 200 OK
My docker containers
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eab92ef5610b registry:2 "/entrypoint.sh /etc…" 22 seconds ago Up 21 seconds 5000/tcp docker-registry
bdd4f0886c19 nginx:1.20.2 "/docker-entrypoint.…" 3 months ago Up About an hour 0.0.0.0:80->80/tcp, :::80->80/tcp nginx-ingress
5e7e8e1b591e myoung34/github-runner:2.299.1-ubuntu-jammy "/entrypoint.sh ./bi…" 37 hours ago Up 27 minutes github-runner
My Nginx config
upstream registry {
server docker-registry:5000;
}
map $upstream_http_docker_distribution_api_version $docker_distribution_api_version {
'' 'registry/2.0';
}
server {
listen 443;
server_name registry.mydomain.com;
client_max_body_size 0;
chunked_transfer_encoding on;
location /v2/ {
if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
return 404;
}
auth_basic "Registry Realm";
auth_basic_user_file /etc/nginx/conf.d/nginx.htpasswd;
add_header 'Docker-Distribution-Api-Version' $docker_distribution_api_version always;
proxy_pass http://registry;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme; # default => $scheme | cloudflare => https
proxy_read_timeout 900;
}
}
Docker registry logs
time="2022-12-01T13:05:21.576274727Z" level=warning msg="No HTTP secret provided - generated random secret. This may cause problems with uploads if multiple registries are behind a load-balancer. To provide a shared secret, fill in http.secret in the configuration file or set the REGISTRY_HTTP_SECRET environment variable." go.version=go1.16.15 instance.id=9fae9ec3-fe4f-443e-b6b3-d472db92412a service=registry version="v2.8.1+unknown"
time="2022-12-01T13:05:21.576362865Z" level=info msg="redis not configured" go.version=go1.16.15 instance.id=9fae9ec3-fe4f-443e-b6b3-d472db92412a service=registry version="v2.8.1+unknown"
time="2022-12-01T13:05:21.576419048Z" level=info msg="Starting upload purge in 9m0s" go.version=go1.16.15 instance.id=9fae9ec3-fe4f-443e-b6b3-d472db92412a service=registry version="v2.8.1+unknown"
time="2022-12-01T13:05:21.588466586Z" level=info msg="using inmemory blob descriptor cache" go.version=go1.16.15 instance.id=9fae9ec3-fe4f-443e-b6b3-d472db92412a service=registry version="v2.8.1+unknown"
time="2022-12-01T13:05:21.588770043Z" level=info msg="listening on [::]:5000" go.version=go1.16.15 instance.id=9fae9ec3-fe4f-443e-b6b3-d472db92412a service=registry version="v2.8.1+unknown"
172.18.0.1 - - [01/Dec/2022:13:05:28 +0000] "GET /v2/ HTTP/1.1" 401 87 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:28.906573279Z" level=warning msg="error authorizing context: basic authentication challenge for realm "Registry Realm": invalid authorization credential" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=4d19514d-336c-4c1c-ad8a-91a8a4be1e0c http.request.method=GET http.request.remoteaddr="172.18.0.1:42346" http.request.uri="/v2/" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))"
172.18.0.1 - - [01/Dec/2022:13:05:28 +0000] "GET /v2/ HTTP/1.1" 200 2 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:28.91635475Z" level=info msg="authorized request" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=6f0cd72e-52dd-4408-812c-c770c1daeb4c http.request.method=GET http.request.remoteaddr="172.18.0.1:42348" http.request.uri="/v2/" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))"
time="2022-12-01T13:05:28.916428075Z" level=info msg="response completed" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=6f0cd72e-52dd-4408-812c-c770c1daeb4c http.request.method=GET http.request.remoteaddr="172.18.0.1:42348" http.request.uri="/v2/" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.contenttype="application/json; charset=utf-8" http.response.duration=8.872969ms http.response.status=200 http.response.written=2
172.18.0.1 - - [01/Dec/2022:13:05:47 +0000] "GET /v2/ HTTP/1.1" 401 87 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:47.955166053Z" level=warning msg="error authorizing context: basic authentication challenge for realm "Registry Realm": invalid authorization credential" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=060b8ad8-027d-4e1f-aae8-853e694aa071 http.request.method=GET http.request.remoteaddr="172.18.0.1:42352" http.request.uri="/v2/" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))"
time="2022-12-01T13:05:47.961400261Z" level=info msg="authorized request" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=23ab1f44-c687-41a4-854c-88d5223f3c19 http.request.method=HEAD http.request.remoteaddr="172.18.0.1:42354" http.request.uri="/v2/alpine/blobs/sha256:c158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" vars.digest="sha256:c158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715" vars.name=alpine
time="2022-12-01T13:05:47.961616963Z" level=error msg="response completed with error" auth.user.name="docker_user" err.code="blob unknown" err.detail=sha256:c158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715 err.message="blob unknown to registry" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=23ab1f44-c687-41a4-854c-88d5223f3c19 http.request.method=HEAD http.request.remoteaddr="172.18.0.1:42354" http.request.uri="/v2/alpine/blobs/sha256:c158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.contenttype="application/json; charset=utf-8" http.response.duration=3.888659ms http.response.status=404 http.response.written=157 vars.digest="sha256:c158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715" vars.name=alpine
172.18.0.1 - - [01/Dec/2022:13:05:47 +0000] "HEAD /v2/alpine/blobs/sha256:c158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715 HTTP/1.1" 404 157 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:48.013676521Z" level=info msg="authorized request" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=76e692cf-3fec-4739-ac68-dd7e785bc3c0 http.request.method=POST http.request.remoteaddr="172.18.0.1:42356" http.request.uri="/v2/alpine/blobs/uploads/" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" vars.name=alpine
time="2022-12-01T13:05:48.026046059Z" level=info msg="response completed" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=76e692cf-3fec-4739-ac68-dd7e785bc3c0 http.request.method=POST http.request.remoteaddr="172.18.0.1:42356" http.request.uri="/v2/alpine/blobs/uploads/" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.duration=16.091457ms http.response.status=202 http.response.written=0
172.18.0.1 - - [01/Dec/2022:13:05:48 +0000] "POST /v2/alpine/blobs/uploads/ HTTP/1.1" 202 0 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:48.037049651Z" level=info msg="authorized request" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=9d35eb65-a80e-43c7-bcad-42bcd0723ad1 http.request.method=PATCH http.request.remoteaddr="172.18.0.1:42358" http.request.uri="/v2/alpine/blobs/uploads/cb21fc83-6e19-4c78-b5c0-badb0ff8fe04?_state=TRS7R1oNENXqtiEnnSLbJD3l9O9Pq9tEtZudktNkQQF7Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiY2IyMWZjODMtNmUxOS00Yzc4LWI1YzAtYmFkYjBmZjhmZTA0IiwiT2Zmc2V0IjowLCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ4LjAxMzc0MDcxN1oifQ%3D%3D" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" vars.name=alpine vars.uuid=cb21fc83-6e19-4c78-b5c0-badb0ff8fe04
time="2022-12-01T13:05:48.956206497Z" level=info msg="response completed" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=9d35eb65-a80e-43c7-bcad-42bcd0723ad1 http.request.method=PATCH http.request.remoteaddr="172.18.0.1:42358" http.request.uri="/v2/alpine/blobs/uploads/cb21fc83-6e19-4c78-b5c0-badb0ff8fe04?_state=TRS7R1oNENXqtiEnnSLbJD3l9O9Pq9tEtZudktNkQQF7Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiY2IyMWZjODMtNmUxOS00Yzc4LWI1YzAtYmFkYjBmZjhmZTA0IiwiT2Zmc2V0IjowLCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ4LjAxMzc0MDcxN1oifQ%3D%3D" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.duration=929.13763ms http.response.status=202 http.response.written=0
172.18.0.1 - - [01/Dec/2022:13:05:48 +0000] "PATCH /v2/alpine/blobs/uploads/cb21fc83-6e19-4c78-b5c0-badb0ff8fe04?_state=TRS7R1oNENXqtiEnnSLbJD3l9O9Pq9tEtZudktNkQQF7Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiY2IyMWZjODMtNmUxOS00Yzc4LWI1YzAtYmFkYjBmZjhmZTA0IiwiT2Zmc2V0IjowLCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ4LjAxMzc0MDcxN1oifQ%3D%3D HTTP/1.1" 202 0 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:48.960962747Z" level=info msg="authorized request" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=b4955223-ae69-4d0f-8643-b70148b764fd http.request.method=PUT http.request.remoteaddr="172.18.0.1:42360" http.request.uri="/v2/alpine/blobs/uploads/cb21fc83-6e19-4c78-b5c0-badb0ff8fe04?_state=NVqJ2JVvha-PdWnWeVonnGI1qxrWbyJiE3ECtgBK_v17Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiY2IyMWZjODMtNmUxOS00Yzc4LWI1YzAtYmFkYjBmZjhmZTA0IiwiT2Zmc2V0IjozMzcwNzA2LCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ4WiJ9&digest=sha256%3Ac158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" vars.name=alpine vars.uuid=cb21fc83-6e19-4c78-b5c0-badb0ff8fe04
time="2022-12-01T13:05:48.975845667Z" level=info msg="response completed" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=b4955223-ae69-4d0f-8643-b70148b764fd http.request.method=PUT http.request.remoteaddr="172.18.0.1:42360" http.request.uri="/v2/alpine/blobs/uploads/cb21fc83-6e19-4c78-b5c0-badb0ff8fe04?_state=NVqJ2JVvha-PdWnWeVonnGI1qxrWbyJiE3ECtgBK_v17Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiY2IyMWZjODMtNmUxOS00Yzc4LWI1YzAtYmFkYjBmZjhmZTA0IiwiT2Zmc2V0IjozMzcwNzA2LCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ4WiJ9&digest=sha256%3Ac158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.duration=18.779158ms http.response.status=201 http.response.written=0
172.18.0.1 - - [01/Dec/2022:13:05:48 +0000] "PUT /v2/alpine/blobs/uploads/cb21fc83-6e19-4c78-b5c0-badb0ff8fe04?_state=NVqJ2JVvha-PdWnWeVonnGI1qxrWbyJiE3ECtgBK_v17Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiY2IyMWZjODMtNmUxOS00Yzc4LWI1YzAtYmFkYjBmZjhmZTA0IiwiT2Zmc2V0IjozMzcwNzA2LCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ4WiJ9&digest=sha256%3Ac158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715 HTTP/1.1" 201 0 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:48.987290535Z" level=info msg="authorized request" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=53c874f2-6f55-4cde-b5ca-596dda13583a http.request.method=HEAD http.request.remoteaddr="172.18.0.1:42362" http.request.uri="/v2/alpine/blobs/sha256:c158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" vars.digest="sha256:c158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715" vars.name=alpine
time="2022-12-01T13:05:48.987671994Z" level=info msg="response completed" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=53c874f2-6f55-4cde-b5ca-596dda13583a http.request.method=HEAD http.request.remoteaddr="172.18.0.1:42362" http.request.uri="/v2/alpine/blobs/sha256:c158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.contenttype="application/octet-stream" http.response.duration=10.691772ms http.response.status=200 http.response.written=0
172.18.0.1 - - [01/Dec/2022:13:05:48 +0000] "HEAD /v2/alpine/blobs/sha256:c158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715 HTTP/1.1" 200 0 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:48.99996587Z" level=info msg="authorized request" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=867604cb-09cd-43f4-91b0-52ead32d664e http.request.method=HEAD http.request.remoteaddr="172.18.0.1:42364" http.request.uri="/v2/alpine/blobs/sha256:49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" vars.digest="sha256:49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da" vars.name=alpine
time="2022-12-01T13:05:49.000171622Z" level=error msg="response completed with error" auth.user.name="docker_user" err.code="blob unknown" err.detail=sha256:49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da err.message="blob unknown to registry" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=867604cb-09cd-43f4-91b0-52ead32d664e http.request.method=HEAD http.request.remoteaddr="172.18.0.1:42364" http.request.uri="/v2/alpine/blobs/sha256:49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.contenttype="application/json; charset=utf-8" http.response.duration=4.875849ms http.response.status=404 http.response.written=157 vars.digest="sha256:49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da" vars.name=alpine
172.18.0.1 - - [01/Dec/2022:13:05:48 +0000] "HEAD /v2/alpine/blobs/sha256:49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da HTTP/1.1" 404 157 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:49.009848946Z" level=info msg="authorized request" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=7e29e9ea-0864-44e2-bc75-732d246af8a9 http.request.method=POST http.request.remoteaddr="172.18.0.1:42366" http.request.uri="/v2/alpine/blobs/uploads/" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" vars.name=alpine
172.18.0.1 - - [01/Dec/2022:13:05:49 +0000] "POST /v2/alpine/blobs/uploads/ HTTP/1.1" 202 0 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:49.022631954Z" level=info msg="response completed" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=7e29e9ea-0864-44e2-bc75-732d246af8a9 http.request.method=POST http.request.remoteaddr="172.18.0.1:42366" http.request.uri="/v2/alpine/blobs/uploads/" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.duration=21.407075ms http.response.status=202 http.response.written=0
time="2022-12-01T13:05:49.032911791Z" level=info msg="authorized request" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=abab2c41-40ee-4c76-8456-3b30b62061af http.request.method=PATCH http.request.remoteaddr="172.18.0.1:42370" http.request.uri="/v2/alpine/blobs/uploads/32b60b00-82d6-4003-a470-7502f715be61?_state=E8EC15n0xy4YiaxJAynDeAl0SaoKMX0tvtbqakyT13R7Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiMzJiNjBiMDAtODJkNi00MDAzLWE0NzAtNzUwMmY3MTViZTYxIiwiT2Zmc2V0IjowLCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ5LjAwOTk2MjU3WiJ9" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" vars.name=alpine vars.uuid=32b60b00-82d6-4003-a470-7502f715be61
172.18.0.1 - - [01/Dec/2022:13:05:49 +0000] "PATCH /v2/alpine/blobs/uploads/32b60b00-82d6-4003-a470-7502f715be61?_state=E8EC15n0xy4YiaxJAynDeAl0SaoKMX0tvtbqakyT13R7Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiMzJiNjBiMDAtODJkNi00MDAzLWE0NzAtNzUwMmY3MTViZTYxIiwiT2Zmc2V0IjowLCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ5LjAwOTk2MjU3WiJ9 HTTP/1.1" 202 0 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:49.046509563Z" level=info msg="response completed" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=abab2c41-40ee-4c76-8456-3b30b62061af http.request.method=PATCH http.request.remoteaddr="172.18.0.1:42370" http.request.uri="/v2/alpine/blobs/uploads/32b60b00-82d6-4003-a470-7502f715be61?_state=E8EC15n0xy4YiaxJAynDeAl0SaoKMX0tvtbqakyT13R7Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiMzJiNjBiMDAtODJkNi00MDAzLWE0NzAtNzUwMmY3MTViZTYxIiwiT2Zmc2V0IjowLCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ5LjAwOTk2MjU3WiJ9" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.duration=22.857284ms http.response.status=202 http.response.written=0
time="2022-12-01T13:05:49.052387956Z" level=info msg="authorized request" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=8d17d425-4b89-4597-9314-1f2c35e1afdb http.request.method=PUT http.request.remoteaddr="172.18.0.1:42372" http.request.uri="/v2/alpine/blobs/uploads/32b60b00-82d6-4003-a470-7502f715be61?_state=uN8teJhzBMwlrLnstzPV492DSwZgwtADfHJWpqUsGpx7Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiMzJiNjBiMDAtODJkNi00MDAzLWE0NzAtNzUwMmY3MTViZTYxIiwiT2Zmc2V0IjoxNDcyLCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ5WiJ9&digest=sha256%3A49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" vars.name=alpine vars.uuid=32b60b00-82d6-4003-a470-7502f715be61
time="2022-12-01T13:05:49.067768248Z" level=info msg="response completed" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=8d17d425-4b89-4597-9314-1f2c35e1afdb http.request.method=PUT http.request.remoteaddr="172.18.0.1:42372" http.request.uri="/v2/alpine/blobs/uploads/32b60b00-82d6-4003-a470-7502f715be61?_state=uN8teJhzBMwlrLnstzPV492DSwZgwtADfHJWpqUsGpx7Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiMzJiNjBiMDAtODJkNi00MDAzLWE0NzAtNzUwMmY3MTViZTYxIiwiT2Zmc2V0IjoxNDcyLCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ5WiJ9&digest=sha256%3A49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.duration=20.176783ms http.response.status=201 http.response.written=0
172.18.0.1 - - [01/Dec/2022:13:05:49 +0000] "PUT /v2/alpine/blobs/uploads/32b60b00-82d6-4003-a470-7502f715be61?_state=uN8teJhzBMwlrLnstzPV492DSwZgwtADfHJWpqUsGpx7Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiMzJiNjBiMDAtODJkNi00MDAzLWE0NzAtNzUwMmY3MTViZTYxIiwiT2Zmc2V0IjoxNDcyLCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ5WiJ9&digest=sha256%3A49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da HTTP/1.1" 201 0 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:49.073531198Z" level=info msg="authorized request" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=d77e1f84-f55d-4917-bc8c-aceb4606c23e http.request.method=HEAD http.request.remoteaddr="172.18.0.1:42374" http.request.uri="/v2/alpine/blobs/sha256:49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" vars.digest="sha256:49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da" vars.name=alpine
172.18.0.1 - - [01/Dec/2022:13:05:49 +0000] "HEAD /v2/alpine/blobs/sha256:49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da HTTP/1.1" 200 0 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:49.073748384Z" level=info msg="response completed" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=d77e1f84-f55d-4917-bc8c-aceb4606c23e http.request.method=HEAD http.request.remoteaddr="172.18.0.1:42374" http.request.uri="/v2/alpine/blobs/sha256:49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.contenttype="application/octet-stream" http.response.duration=4.891873ms http.response.status=200 http.response.written=0
time="2022-12-01T13:05:49.081141753Z" level=info msg="authorized request" go.version=go1.16.15 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host="docker-registry:5000" http.request.id=b2329d48-38dd-471c-8691-6b08c1eb7ec5 http.request.method=PUT http.request.remoteaddr="172.18.0.1:42376" http.request.uri="/v2/alpine/manifests/latest" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" vars.name=alpine vars.reference=latest
time="2022-12-01T13:05:49.102368205Z" level=info msg="response completed" go.version=go1.16.15 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host="docker-registry:5000" http.request.id=b2329d48-38dd-471c-8691-6b08c1eb7ec5 http.request.method=PUT http.request.remoteaddr="172.18.0.1:42376" http.request.uri="/v2/alpine/manifests/latest" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.duration=27.598063ms http.response.status=201 http.response.written=0
172.18.0.1 - - [01/Dec/2022:13:05:49 +0000] "PUT /v2/alpine/manifests/latest HTTP/1.1" 201 0 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
Cloudflare is used for SSL. It automatically redirects requests to 443
A:
From the comments:
Is your proxy configuration dropping the /v2 from the requests to the backend?
Yes, it does
That's an issue. The /v2 is part of the API. I believe the following will include the /v2 in the proxied requests:
proxy_pass http://registry/v2/;
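As a side note, here is a minimal sketch (not the asker's exact file) of how proxy_pass handles the /v2 prefix, assuming the upstream registry block from the question:
location /v2/ {
    # no URI part on proxy_pass: the original request path is forwarded unchanged,
    # so /v2/alpine/blobs/... reaches the registry as /v2/alpine/blobs/...
    proxy_pass http://registry;
    # equivalent alternative: proxy_pass http://registry/v2/;
    # by contrast, "proxy_pass http://registry/;" would strip the /v2 prefix and break the registry API
}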
A:
I solved the problem
One of the main problems was that I was only listening for requests on port 443. This was wrong because my Cloudflare encryption mode was "Flexible". In this mode, all connections between Cloudflare and your origin are made over plain HTTP.
I changed the listen port from 443 to 80, adjusted the location block, and set "proxy_set_header X-Forwarded-Proto" to "https", and the problem was solved.
server {
    listen 80;
    ....
    ....
    location / {
        ....
        proxy_set_header X-Forwarded-Proto https;
        ....
    }
}
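For reference, a sketch of the full adjusted config, combining this fix with the /v2/ location block from the question (the user-agent check is omitted for brevity; this is an illustration, not the exact file behind the .... placeholders above):
upstream registry {
    server docker-registry:5000;
}

map $upstream_http_docker_distribution_api_version $docker_distribution_api_version {
    '' 'registry/2.0';
}

server {
    # Cloudflare "Flexible" terminates TLS at the edge and talks plain HTTP to the origin,
    # so the origin listens on 80 instead of 443
    listen 80;
    server_name registry.mydomain.com;

    client_max_body_size 0;
    chunked_transfer_encoding on;

    location /v2/ {
        auth_basic "Registry Realm";
        auth_basic_user_file /etc/nginx/conf.d/nginx.htpasswd;
        add_header 'Docker-Distribution-Api-Version' $docker_distribution_api_version always;

        proxy_pass http://registry;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # hard-coded because the Cloudflare-to-origin connection itself is plain HTTP,
        # while clients actually connect over HTTPS
        proxy_set_header X-Forwarded-Proto https;
        proxy_read_timeout 900;
    }
}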
|
Docker cannot push image to private registry | received unexpected HTTP status: 200 OK
|
I have a GitHub Runner and a Docker registry on my server, and I'm trying to set up CI/CD for some applications.
Pushing directly to the container name with the port works smoothly:
docker pull alpine
docker tag alpine:latest docker-registry:5000/alpine:latest
docker login docker-registry:5000
Login Succeeded
docker push docker-registry:5000/alpine:latest
The push refers to repository [docker-registry:5000/alpine]
ded7a220bb05: Pushed
But I get a problem when I use the domain:
docker pull alpine
docker tag alpine:latest registry.mydomain.com/alpine:latest
docker login registry.mydomain.com
Login Succeeded
docker push registry.mydomain.com/alpine:latest
The push refers to repository [registry.mydomain.com/alpine]
ded7a220bb05: Retrying in 1 second
received unexpected HTTP status: 200 OK
My docker containers
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eab92ef5610b registry:2 "/entrypoint.sh /etc…" 22 seconds ago Up 21 seconds 5000/tcp docker-registry
bdd4f0886c19 nginx:1.20.2 "/docker-entrypoint.…" 3 months ago Up About an hour 0.0.0.0:80->80/tcp, :::80->80/tcp nginx-ingress
5e7e8e1b591e myoung34/github-runner:2.299.1-ubuntu-jammy "/entrypoint.sh ./bi…" 37 hours ago Up 27 minutes github-runner
My Nginx config
upstream registry {
    server docker-registry:5000;
}

map $upstream_http_docker_distribution_api_version $docker_distribution_api_version {
    '' 'registry/2.0';
}

server {
    listen 443;
    server_name registry.mydomain.com;

    client_max_body_size 0;
    chunked_transfer_encoding on;

    location /v2/ {
        if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
            return 404;
        }

        auth_basic "Registry Realm";
        auth_basic_user_file /etc/nginx/conf.d/nginx.htpasswd;

        add_header 'Docker-Distribution-Api-Version' $docker_distribution_api_version always;

        proxy_pass http://registry;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme; # default => $scheme | cloudflare => https
        proxy_read_timeout 900;
    }
}
Docker registry logs
time="2022-12-01T13:05:21.576274727Z" level=warning msg="No HTTP secret provided - generated random secret. This may cause problems with uploads if multiple registries are behind a load-balancer. To provide a shared secret, fill in http.secret in the configuration file or set the REGISTRY_HTTP_SECRET environment variable." go.version=go1.16.15 instance.id=9fae9ec3-fe4f-443e-b6b3-d472db92412a service=registry version="v2.8.1+unknown"
time="2022-12-01T13:05:21.576362865Z" level=info msg="redis not configured" go.version=go1.16.15 instance.id=9fae9ec3-fe4f-443e-b6b3-d472db92412a service=registry version="v2.8.1+unknown"
time="2022-12-01T13:05:21.576419048Z" level=info msg="Starting upload purge in 9m0s" go.version=go1.16.15 instance.id=9fae9ec3-fe4f-443e-b6b3-d472db92412a service=registry version="v2.8.1+unknown"
time="2022-12-01T13:05:21.588466586Z" level=info msg="using inmemory blob descriptor cache" go.version=go1.16.15 instance.id=9fae9ec3-fe4f-443e-b6b3-d472db92412a service=registry version="v2.8.1+unknown"
time="2022-12-01T13:05:21.588770043Z" level=info msg="listening on [::]:5000" go.version=go1.16.15 instance.id=9fae9ec3-fe4f-443e-b6b3-d472db92412a service=registry version="v2.8.1+unknown"
172.18.0.1 - - [01/Dec/2022:13:05:28 +0000] "GET /v2/ HTTP/1.1" 401 87 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:28.906573279Z" level=warning msg="error authorizing context: basic authentication challenge for realm "Registry Realm": invalid authorization credential" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=4d19514d-336c-4c1c-ad8a-91a8a4be1e0c http.request.method=GET http.request.remoteaddr="172.18.0.1:42346" http.request.uri="/v2/" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))"
172.18.0.1 - - [01/Dec/2022:13:05:28 +0000] "GET /v2/ HTTP/1.1" 200 2 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:28.91635475Z" level=info msg="authorized request" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=6f0cd72e-52dd-4408-812c-c770c1daeb4c http.request.method=GET http.request.remoteaddr="172.18.0.1:42348" http.request.uri="/v2/" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))"
time="2022-12-01T13:05:28.916428075Z" level=info msg="response completed" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=6f0cd72e-52dd-4408-812c-c770c1daeb4c http.request.method=GET http.request.remoteaddr="172.18.0.1:42348" http.request.uri="/v2/" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.contenttype="application/json; charset=utf-8" http.response.duration=8.872969ms http.response.status=200 http.response.written=2
172.18.0.1 - - [01/Dec/2022:13:05:47 +0000] "GET /v2/ HTTP/1.1" 401 87 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:47.955166053Z" level=warning msg="error authorizing context: basic authentication challenge for realm "Registry Realm": invalid authorization credential" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=060b8ad8-027d-4e1f-aae8-853e694aa071 http.request.method=GET http.request.remoteaddr="172.18.0.1:42352" http.request.uri="/v2/" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))"
time="2022-12-01T13:05:47.961400261Z" level=info msg="authorized request" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=23ab1f44-c687-41a4-854c-88d5223f3c19 http.request.method=HEAD http.request.remoteaddr="172.18.0.1:42354" http.request.uri="/v2/alpine/blobs/sha256:c158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" vars.digest="sha256:c158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715" vars.name=alpine
time="2022-12-01T13:05:47.961616963Z" level=error msg="response completed with error" auth.user.name="docker_user" err.code="blob unknown" err.detail=sha256:c158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715 err.message="blob unknown to registry" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=23ab1f44-c687-41a4-854c-88d5223f3c19 http.request.method=HEAD http.request.remoteaddr="172.18.0.1:42354" http.request.uri="/v2/alpine/blobs/sha256:c158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.contenttype="application/json; charset=utf-8" http.response.duration=3.888659ms http.response.status=404 http.response.written=157 vars.digest="sha256:c158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715" vars.name=alpine
172.18.0.1 - - [01/Dec/2022:13:05:47 +0000] "HEAD /v2/alpine/blobs/sha256:c158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715 HTTP/1.1" 404 157 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:48.013676521Z" level=info msg="authorized request" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=76e692cf-3fec-4739-ac68-dd7e785bc3c0 http.request.method=POST http.request.remoteaddr="172.18.0.1:42356" http.request.uri="/v2/alpine/blobs/uploads/" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" vars.name=alpine
time="2022-12-01T13:05:48.026046059Z" level=info msg="response completed" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=76e692cf-3fec-4739-ac68-dd7e785bc3c0 http.request.method=POST http.request.remoteaddr="172.18.0.1:42356" http.request.uri="/v2/alpine/blobs/uploads/" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.duration=16.091457ms http.response.status=202 http.response.written=0
172.18.0.1 - - [01/Dec/2022:13:05:48 +0000] "POST /v2/alpine/blobs/uploads/ HTTP/1.1" 202 0 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:48.037049651Z" level=info msg="authorized request" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=9d35eb65-a80e-43c7-bcad-42bcd0723ad1 http.request.method=PATCH http.request.remoteaddr="172.18.0.1:42358" http.request.uri="/v2/alpine/blobs/uploads/cb21fc83-6e19-4c78-b5c0-badb0ff8fe04?_state=TRS7R1oNENXqtiEnnSLbJD3l9O9Pq9tEtZudktNkQQF7Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiY2IyMWZjODMtNmUxOS00Yzc4LWI1YzAtYmFkYjBmZjhmZTA0IiwiT2Zmc2V0IjowLCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ4LjAxMzc0MDcxN1oifQ%3D%3D" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" vars.name=alpine vars.uuid=cb21fc83-6e19-4c78-b5c0-badb0ff8fe04
time="2022-12-01T13:05:48.956206497Z" level=info msg="response completed" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=9d35eb65-a80e-43c7-bcad-42bcd0723ad1 http.request.method=PATCH http.request.remoteaddr="172.18.0.1:42358" http.request.uri="/v2/alpine/blobs/uploads/cb21fc83-6e19-4c78-b5c0-badb0ff8fe04?_state=TRS7R1oNENXqtiEnnSLbJD3l9O9Pq9tEtZudktNkQQF7Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiY2IyMWZjODMtNmUxOS00Yzc4LWI1YzAtYmFkYjBmZjhmZTA0IiwiT2Zmc2V0IjowLCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ4LjAxMzc0MDcxN1oifQ%3D%3D" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.duration=929.13763ms http.response.status=202 http.response.written=0
172.18.0.1 - - [01/Dec/2022:13:05:48 +0000] "PATCH /v2/alpine/blobs/uploads/cb21fc83-6e19-4c78-b5c0-badb0ff8fe04?_state=TRS7R1oNENXqtiEnnSLbJD3l9O9Pq9tEtZudktNkQQF7Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiY2IyMWZjODMtNmUxOS00Yzc4LWI1YzAtYmFkYjBmZjhmZTA0IiwiT2Zmc2V0IjowLCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ4LjAxMzc0MDcxN1oifQ%3D%3D HTTP/1.1" 202 0 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:48.960962747Z" level=info msg="authorized request" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=b4955223-ae69-4d0f-8643-b70148b764fd http.request.method=PUT http.request.remoteaddr="172.18.0.1:42360" http.request.uri="/v2/alpine/blobs/uploads/cb21fc83-6e19-4c78-b5c0-badb0ff8fe04?_state=NVqJ2JVvha-PdWnWeVonnGI1qxrWbyJiE3ECtgBK_v17Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiY2IyMWZjODMtNmUxOS00Yzc4LWI1YzAtYmFkYjBmZjhmZTA0IiwiT2Zmc2V0IjozMzcwNzA2LCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ4WiJ9&digest=sha256%3Ac158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" vars.name=alpine vars.uuid=cb21fc83-6e19-4c78-b5c0-badb0ff8fe04
time="2022-12-01T13:05:48.975845667Z" level=info msg="response completed" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=b4955223-ae69-4d0f-8643-b70148b764fd http.request.method=PUT http.request.remoteaddr="172.18.0.1:42360" http.request.uri="/v2/alpine/blobs/uploads/cb21fc83-6e19-4c78-b5c0-badb0ff8fe04?_state=NVqJ2JVvha-PdWnWeVonnGI1qxrWbyJiE3ECtgBK_v17Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiY2IyMWZjODMtNmUxOS00Yzc4LWI1YzAtYmFkYjBmZjhmZTA0IiwiT2Zmc2V0IjozMzcwNzA2LCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ4WiJ9&digest=sha256%3Ac158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.duration=18.779158ms http.response.status=201 http.response.written=0
172.18.0.1 - - [01/Dec/2022:13:05:48 +0000] "PUT /v2/alpine/blobs/uploads/cb21fc83-6e19-4c78-b5c0-badb0ff8fe04?_state=NVqJ2JVvha-PdWnWeVonnGI1qxrWbyJiE3ECtgBK_v17Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiY2IyMWZjODMtNmUxOS00Yzc4LWI1YzAtYmFkYjBmZjhmZTA0IiwiT2Zmc2V0IjozMzcwNzA2LCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ4WiJ9&digest=sha256%3Ac158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715 HTTP/1.1" 201 0 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:48.987290535Z" level=info msg="authorized request" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=53c874f2-6f55-4cde-b5ca-596dda13583a http.request.method=HEAD http.request.remoteaddr="172.18.0.1:42362" http.request.uri="/v2/alpine/blobs/sha256:c158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" vars.digest="sha256:c158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715" vars.name=alpine
time="2022-12-01T13:05:48.987671994Z" level=info msg="response completed" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=53c874f2-6f55-4cde-b5ca-596dda13583a http.request.method=HEAD http.request.remoteaddr="172.18.0.1:42362" http.request.uri="/v2/alpine/blobs/sha256:c158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.contenttype="application/octet-stream" http.response.duration=10.691772ms http.response.status=200 http.response.written=0
172.18.0.1 - - [01/Dec/2022:13:05:48 +0000] "HEAD /v2/alpine/blobs/sha256:c158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715 HTTP/1.1" 200 0 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:48.99996587Z" level=info msg="authorized request" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=867604cb-09cd-43f4-91b0-52ead32d664e http.request.method=HEAD http.request.remoteaddr="172.18.0.1:42364" http.request.uri="/v2/alpine/blobs/sha256:49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" vars.digest="sha256:49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da" vars.name=alpine
time="2022-12-01T13:05:49.000171622Z" level=error msg="response completed with error" auth.user.name="docker_user" err.code="blob unknown" err.detail=sha256:49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da err.message="blob unknown to registry" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=867604cb-09cd-43f4-91b0-52ead32d664e http.request.method=HEAD http.request.remoteaddr="172.18.0.1:42364" http.request.uri="/v2/alpine/blobs/sha256:49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.contenttype="application/json; charset=utf-8" http.response.duration=4.875849ms http.response.status=404 http.response.written=157 vars.digest="sha256:49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da" vars.name=alpine
172.18.0.1 - - [01/Dec/2022:13:05:48 +0000] "HEAD /v2/alpine/blobs/sha256:49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da HTTP/1.1" 404 157 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:49.009848946Z" level=info msg="authorized request" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=7e29e9ea-0864-44e2-bc75-732d246af8a9 http.request.method=POST http.request.remoteaddr="172.18.0.1:42366" http.request.uri="/v2/alpine/blobs/uploads/" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" vars.name=alpine
172.18.0.1 - - [01/Dec/2022:13:05:49 +0000] "POST /v2/alpine/blobs/uploads/ HTTP/1.1" 202 0 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:49.022631954Z" level=info msg="response completed" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=7e29e9ea-0864-44e2-bc75-732d246af8a9 http.request.method=POST http.request.remoteaddr="172.18.0.1:42366" http.request.uri="/v2/alpine/blobs/uploads/" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.duration=21.407075ms http.response.status=202 http.response.written=0
time="2022-12-01T13:05:49.032911791Z" level=info msg="authorized request" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=abab2c41-40ee-4c76-8456-3b30b62061af http.request.method=PATCH http.request.remoteaddr="172.18.0.1:42370" http.request.uri="/v2/alpine/blobs/uploads/32b60b00-82d6-4003-a470-7502f715be61?_state=E8EC15n0xy4YiaxJAynDeAl0SaoKMX0tvtbqakyT13R7Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiMzJiNjBiMDAtODJkNi00MDAzLWE0NzAtNzUwMmY3MTViZTYxIiwiT2Zmc2V0IjowLCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ5LjAwOTk2MjU3WiJ9" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" vars.name=alpine vars.uuid=32b60b00-82d6-4003-a470-7502f715be61
172.18.0.1 - - [01/Dec/2022:13:05:49 +0000] "PATCH /v2/alpine/blobs/uploads/32b60b00-82d6-4003-a470-7502f715be61?_state=E8EC15n0xy4YiaxJAynDeAl0SaoKMX0tvtbqakyT13R7Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiMzJiNjBiMDAtODJkNi00MDAzLWE0NzAtNzUwMmY3MTViZTYxIiwiT2Zmc2V0IjowLCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ5LjAwOTk2MjU3WiJ9 HTTP/1.1" 202 0 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:49.046509563Z" level=info msg="response completed" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=abab2c41-40ee-4c76-8456-3b30b62061af http.request.method=PATCH http.request.remoteaddr="172.18.0.1:42370" http.request.uri="/v2/alpine/blobs/uploads/32b60b00-82d6-4003-a470-7502f715be61?_state=E8EC15n0xy4YiaxJAynDeAl0SaoKMX0tvtbqakyT13R7Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiMzJiNjBiMDAtODJkNi00MDAzLWE0NzAtNzUwMmY3MTViZTYxIiwiT2Zmc2V0IjowLCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ5LjAwOTk2MjU3WiJ9" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.duration=22.857284ms http.response.status=202 http.response.written=0
time="2022-12-01T13:05:49.052387956Z" level=info msg="authorized request" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=8d17d425-4b89-4597-9314-1f2c35e1afdb http.request.method=PUT http.request.remoteaddr="172.18.0.1:42372" http.request.uri="/v2/alpine/blobs/uploads/32b60b00-82d6-4003-a470-7502f715be61?_state=uN8teJhzBMwlrLnstzPV492DSwZgwtADfHJWpqUsGpx7Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiMzJiNjBiMDAtODJkNi00MDAzLWE0NzAtNzUwMmY3MTViZTYxIiwiT2Zmc2V0IjoxNDcyLCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ5WiJ9&digest=sha256%3A49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" vars.name=alpine vars.uuid=32b60b00-82d6-4003-a470-7502f715be61
time="2022-12-01T13:05:49.067768248Z" level=info msg="response completed" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=8d17d425-4b89-4597-9314-1f2c35e1afdb http.request.method=PUT http.request.remoteaddr="172.18.0.1:42372" http.request.uri="/v2/alpine/blobs/uploads/32b60b00-82d6-4003-a470-7502f715be61?_state=uN8teJhzBMwlrLnstzPV492DSwZgwtADfHJWpqUsGpx7Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiMzJiNjBiMDAtODJkNi00MDAzLWE0NzAtNzUwMmY3MTViZTYxIiwiT2Zmc2V0IjoxNDcyLCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ5WiJ9&digest=sha256%3A49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.duration=20.176783ms http.response.status=201 http.response.written=0
172.18.0.1 - - [01/Dec/2022:13:05:49 +0000] "PUT /v2/alpine/blobs/uploads/32b60b00-82d6-4003-a470-7502f715be61?_state=uN8teJhzBMwlrLnstzPV492DSwZgwtADfHJWpqUsGpx7Ik5hbWUiOiJhbHBpbmUiLCJVVUlEIjoiMzJiNjBiMDAtODJkNi00MDAzLWE0NzAtNzUwMmY3MTViZTYxIiwiT2Zmc2V0IjoxNDcyLCJTdGFydGVkQXQiOiIyMDIyLTEyLTAxVDEzOjA1OjQ5WiJ9&digest=sha256%3A49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da HTTP/1.1" 201 0 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:49.073531198Z" level=info msg="authorized request" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=d77e1f84-f55d-4917-bc8c-aceb4606c23e http.request.method=HEAD http.request.remoteaddr="172.18.0.1:42374" http.request.uri="/v2/alpine/blobs/sha256:49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" vars.digest="sha256:49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da" vars.name=alpine
172.18.0.1 - - [01/Dec/2022:13:05:49 +0000] "HEAD /v2/alpine/blobs/sha256:49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da HTTP/1.1" 200 0 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
time="2022-12-01T13:05:49.073748384Z" level=info msg="response completed" go.version=go1.16.15 http.request.host="docker-registry:5000" http.request.id=d77e1f84-f55d-4917-bc8c-aceb4606c23e http.request.method=HEAD http.request.remoteaddr="172.18.0.1:42374" http.request.uri="/v2/alpine/blobs/sha256:49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.contenttype="application/octet-stream" http.response.duration=4.891873ms http.response.status=200 http.response.written=0
time="2022-12-01T13:05:49.081141753Z" level=info msg="authorized request" go.version=go1.16.15 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host="docker-registry:5000" http.request.id=b2329d48-38dd-471c-8691-6b08c1eb7ec5 http.request.method=PUT http.request.remoteaddr="172.18.0.1:42376" http.request.uri="/v2/alpine/manifests/latest" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" vars.name=alpine vars.reference=latest
time="2022-12-01T13:05:49.102368205Z" level=info msg="response completed" go.version=go1.16.15 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host="docker-registry:5000" http.request.id=b2329d48-38dd-471c-8691-6b08c1eb7ec5 http.request.method=PUT http.request.remoteaddr="172.18.0.1:42376" http.request.uri="/v2/alpine/manifests/latest" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.duration=27.598063ms http.response.status=201 http.response.written=0
172.18.0.1 - - [01/Dec/2022:13:05:49 +0000] "PUT /v2/alpine/manifests/latest HTTP/1.1" 201 0 "" "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.15.0-46-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \\(linux\\))"
Cloudflare is used for SSL. It automatically redirects requests to 443
|
[
"From the comments:\n\n\nIs your proxy configuration dropping the /v2 from the requests to the backend?\n\nYes, it does\n\nThat's an issue. The /v2 is part of the API. I believe the following will include the /v2 in the proxied requests:\n proxy_pass http://registry/v2/;\n\n",
"I solved the problem\nOne of the main problems was that I was only listening to requests from port 443. This was wrong because my Cloudflare's encryption mode was \"Flexible\". In this mode, all connections between Cloudflare and your origin are made through HTTP\nI changed 443 to 80, location, and the value of \"proxy_set_header X-Forwarded-Proto\" with \"https\", problem solved ))\nserver {\n\n listen 80;\n ....\n\n ....\n location / {\n ....\n proxy_set_header X-Forwarded-Proto https;\n ....\n }\n\n}\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"docker_registry",
"nginx_reverse_proxy"
] |
stackoverflow_0074642283_docker_registry_nginx_reverse_proxy.txt
|