Response | Instruction | Prompt
---|---|---
The easiest way is to use the -OutVariable common parameter on Import-Csv:

Import-Csv -Path:$File -Delimiter:$Delimiter -OutVariable:CSVContents

will save the result in $CSVContents. From the Import-Csv TechNet page: "This cmdlet supports the common parameters: -Verbose, -Debug, -ErrorAction, -ErrorVariable, -OutBuffer, and -OutVariable. For more information, see about_CommonParameters (http://go.microsoft.com/fwlink/p/?LinkID=113216)."

Another alternative is to build an argument hash and "splat" it:

$myArgs = @{
    Path = "$HOME\temp\foo"
}
Get-Content @myArgs

Update for PowerShell v1 (untested):

( IF($Header) { Import-Csv $File -delimiter $delimiter -Header $header }
  ELSE { Import-Csv $File -delimiter $delimiter} ) |
    # More pipeline commands here, for example:
    Format-Table

Horrible, disgusting version (untested):

$ImportCommand = "Import-Csv $File -delimiter $delimiter"
If($header -ne $null) { $ImportCommand += " -header $header" }
Invoke-Expression $ImportCommand |
    Format-Table
|
UPDATE: I just found out that the client I am running this on is PS V1, so I can't use splatting.

I have a script for processing CSV files. I don't know ahead of time whether the file will have a header, so I'm prompting the user to either input a header or use the one in the file:

$header = Read-Host 'Input your own header?'

What I want is to check whether the header variable has been set, to decide whether to pass that flag when executing the Import-Csv cmdlet. I have tried a number of things along the lines of:

IF($Header)
{Import-Csv $File -delimiter $delimiter -Header $header }
ELSE
{Import-Csv $File -delimiter $delimiter} |

Or:

IF($Header)
{Import-Csv $File -delimiter $delimiter -Header $header | %{$_} }
ELSE
{Import-Csv $File -delimiter $delimiter | %{$_}}

The first example results in complaints about an empty pipeline. The second results in the first column of the file just being output to the console, followed by errors once the file is done processing because the rest of the pipeline is empty.

As always, any help would be appreciated. Thanks,
Bill
|
Output to Pipeline from within IF ELSE statement
|
You don't need the TPL. Change your method call to webService.UploadFile(...). What you're trying to do is synchronously upload one piece after the other. Why do you need a pipeline?
|
I want to upload a file in chunks to a web service.

// Web service method:
void UploadFile(int fileId, byte[] chunk, int position, bool complete);

Using the .NET 4 Task Parallel Library, I want to upload a file one chunk at a time. I've got the byte chunks on the client, and I can upload each one just fine:

List<byte[]> chunks = ...;
webService.UploadFileAsyncCompleted += OnChunkUploaded;
foreach (var chunk in chunks)
{
    webService.UploadFileAsync(...);
}

However, that uploads all chunks simultaneously. I want to upload each chunk one after the other. A pipeline, if you will. How can I do this with the .NET 4 Task Parallel Library?
|
Creating a task pipeline with .NET 4?
|
When you run a GridSearchCV, the pipeline steps are recomputed for every combination of hyperparameters. So yes, the vectorization step is performed every time the pipeline is fitted.

Have a look at the sklearn documentation on Pipeline and composite estimators. To quote:

"Fitting transformers may be computationally expensive. With its memory parameter set, Pipeline will cache each transformer after calling fit. This feature is used to avoid computing the fit transformers within a pipeline if the parameters and input data are identical. A typical example is the case of a grid search in which the transformers can be fitted only once and reused for each configuration."

So you can use the memory parameter to cache the transformers:

from tempfile import mkdtemp
cachedir = mkdtemp()
pipe = Pipeline(estimators, memory=cachedir)
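For completeness, a small self-contained sketch of this caching pattern (toy data and a toy parameter grid, not the question's actual tfidf/skf objects) could look like this:

from tempfile import mkdtemp
from shutil import rmtree

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

texts = ["spark is great", "hadoop mapreduce", "spark f g h", "b d e"] * 10
labels = [1, 0, 1, 0] * 10

cachedir = mkdtemp()  # temporary directory holding the cached, fitted transformers
pipe = Pipeline(
    [("vect", TfidfVectorizer()), ("classifier", LogisticRegression())],
    memory=cachedir,
)
params = {"classifier__C": [0.1, 1.0, 10.0]}
grid = GridSearchCV(pipe, params, cv=3, scoring="f1_weighted")
grid.fit(texts, labels)
print(grid.best_params_)

rmtree(cachedir)  # remove the cache directory once the search is finished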
|
I have created a pipeline using sklearn so that multiple models will go through it. Since there is vectorization before fitting the model, I wonder whether this vectorization is always performed before the model fitting process. If so, maybe I should take this preprocessing out of the pipeline.

log_reg = LogisticRegression()
rand_for = RandomForestClassifier()
lin_svc = LinearSVC()
svc = SVC()
# The pipeline contains both vectorization model and classifier
pipe = Pipeline(
[
('vect', tfidf),
('classifier', log_reg)
]
)
# params dictionary example
params_log_reg = {
'classifier__penalty': ['l2'],
'classifier__C': [0.01, 0.1, 1.0, 10.0, 100.0],
'classifier__class_weight': ['balanced', class_weights],
'classifier__solver': ['lbfgs', 'newton-cg'],
# 'classifier__verbose': [2],
'classifier': [log_reg]
}
params = [params_log_reg, params_rand_for, params_lin_svc, params_svc] # param dictionaries for each model
# Grid search for to combine it all
grid = GridSearchCV(
pipe,
params,
cv=skf,
scoring= 'f1_weighted')
grid.fit(features_train, labels_train[:,0])
|
Is preprocessing repeated in a Pipeline each time a new ML model is loaded?
|
With the ‘magrittr’ pipe operator you can put an operand inside {…} to prevent automatic argument substitution:

c(1,3,5) %>% {ls()} %>% mean()
# NA
# Warning message:
# In mean.default(.) : argument is not numeric or logical: returning NA

… but of course this serves no useful purpose. Incidentally, ls() inside a pipeline is executed in its own environment rather than the calling environment, so its use here is even less useful. But a different function that returns a sensible value could be used, e.g.:

c(1,3,5) %>% {rnorm(10)} %>% mean()
# [1] -0.01068046

Or, if you intended for the left-hand side to be passed on, skipping the intermediate ls(), you could do the following:

c(1,3,5) %>% {ls(); .} %>% mean()
# [1] 3

… again, using ls() here won't be meaningful, but some other function that has a side effect would work.
|
I tried searching for this but couldn't find any similar questions. Let's say, for the sake of a simple example, I want to do the following using dplyr's pipe %>%:

c(1,3,5) %>% ls() %>% mean()

Setting aside what the use case would be for a pipeline like this, how can I call a function "mid-pipeline" that doesn't need any inputs coming from the left-hand side, and just pass them along to the next function in the pipeline? Basically, I want to put an "intermission" or "interruption" of sorts into my pipeline, let that function do its thing, and then continue on my merry way. Obviously, the above doesn't actually work, and I know the T pipe %T>% also won't be of use here because it still expects the middle function to need inputs coming from the lhs. Are there options here shy of assigning intermediate objects and restarting the pipeline?
|
How do you call a function that takes no inputs within a pipeline?
|
"However, Select-Object will make a string out of it"

Not quite - Select-Object will return 0 or more objects, and depending on the number, the PowerShell runtime will decide whether the result is stored as a scalar or an array. You can force the result to be an array regardless of how many objects are in the output by using the array sub-expression operator @():

$array = @("55")
$arrayAfterUnique = @($array | Select-Object -Unique)
|
This case only appears with arrays of length one. Example:

$array = @("55")
$arrayAfterUnique = $array | Select-Object -Unique

I want $arrayAfterUnique to be an array of length one. However, Select-Object will make a string out of it. Is there an easy workaround for this problem?
|
Select-Object -Unique returns String instead of String array
|
For a start, cat $(ls) is not the right way to go about this - cat * would be more appropriate. If the number of files is too high, you can use find like this:

find -exec cat {} +

This combines results from find and passes them as arguments to cat, executing as many separate instances as needed. It behaves much the same way as xargs but doesn't require a separate process or the use of any non-standard features like -print0, which is only supported in some versions of find.

find is recursive by default, so you can add -maxdepth 1 to prevent this if your version supports it. If there are other things in the directory, you can also filter by -type (but I guess there aren't, based on your original attempt).
|
I have a large number of files in a directory - ~100k. I want to combine them and pipe them to standard output (I need that to upload them as one file elsewhere), but cat $(ls) complains that -bash: /bin/cat: Argument list too long. I know how to merge all those files into a temporary one, but can I just avoid that?
|
concat a lot of files to stdout
|
It is not linked because your launch line doesn't do it. Notice how the lamemp3enc element is not linked downstream. Update your launch line to:

gst-launch filesrc location=surround.mp4 ! decodebin name=dmux ! queue ! audioconvert ! lamemp3enc ! mux. dmux. ! queue ! x264enc ! mpegtsmux name=mux ! queue ! filesink location=out.ts

The only change is " ! mux." after the lamemp3enc, to tell it to link to the mpegtsmux. While you are updating things, note that you are using GStreamer 0.10, which is years obsolete and unmaintained; please upgrade to the 1.x series to get the latest improvements and bugfixes.
|
I'm looking for an explanation of how to use named elements with respect to muxing two inputs in one module, for instance muxing audio and video in one mpegtsmux module:

gst-launch filesrc location=surround.mp4 ! decodebin name=dmux ! queue ! audioconvert ! lamemp3enc dmux. ! queue ! x264enc ! mpegtsmux name=mux ! queue ! filesink location=out.ts

The above pipeline gives a plugin interconnection in which the audio doesn't connect to mpegtsmux. How do I modify the command line to have the audio and video muxed in mpegtsmux? Thanks!
|
Gstreamer pipeline multiple sink to one src
|
How about just:

bash my_script.sh > >(tee log.txt) 2>&1

Also, if you want to append the output when log.txt already exists, add the -a option to tee:

bash my_script.sh > >(tee -a log.txt) 2>&1

It's actually equivalent to:

bash my_script.sh 2>&1 | tee log.txt

or:

bash my_script.sh 2>&1 | tee -a log.txt
|
I'm running a pipeline of commands that produce STDERR and STDOUT output. I want to save both outputs in a single log file. These are my attempts to do it:

bash my_script.sh > log.txt #Only saves STDOUT
bash my_script.sh > >(tee log.txt) 2> >(tee log.txt >&2) #The STDERR overwrites the STDOUT

I hope you can provide a simple solution to do this. Thanks for your time!
|
How to save STDERR and STDOUT of a pipeline on a file?
|
You need to close both ends of the pipe in the parent after you fork the children, i.e. call close(pipeline[0]) and close(pipeline[1]) before the wait() calls. The problem is that the parent still holds the write end of the pipe open, so wc never sees end-of-file on its input and keeps waiting. The first wait() reaps ls, but the second is waiting for wc, which is blocked on a pipe that never gets closed.
|
I have the following code that fork()'s 2 children from a common parent and implements a pipeline between them. When I call the wait() function in the parent only once, the program runs perfectly. However, if I try to call the wait() function twice (to reap both children), the program does nothing and must be force exited. Can someone tell me why I can't wait for both children here?

int main()
{
int status;
int pipeline[2];
pipe(pipeline);
pid_t pid_A, pid_B;
if( !(pid_A = fork()) )
{
dup2(pipeline[1], 1);
close(pipeline[0]);
close(pipeline[1]);
execl("/bin/ls", "ls", 0);
}
if( !(pid_B = fork()) )
{
dup2(pipeline[0], 0);
close(pipeline[0]);
close(pipeline[1]);
execl("/usr/bin/wc", "wc", 0);
}
wait(&status);
wait(&status);
}
|
fork() 2 children with pipeline, error when wait() for both
|
No, processes that are piped together have no method of two-way communication. If the parsing is really so expensive that this is necessary (I'd guess it isn't, but profile it), then you have two options that I can think of:

1. Have a master program that takes options telling it which tools to run, and in which order, and then have it run a "parse" tool, followed by the requested tools (all using binary I/O), followed by an "output" tool. It wouldn't be terribly difficult to also expose the individual tools, wrapped with the parse/output tools.

2. If users are expected to be knowledgeable enough, have each tool accept flags telling it to expect binary input and give binary output, so that users can chain like:

tool1 -o | tool2 -i -o | tool3 -i -o | tool4 -i

where -o means give binary output and -i means accept binary input.
|
Say I am writing some toolset where every single tool operates on the same textual data stream, parses it, does some operation on it and returns a textual stream back using the same syntax as in the original input. The tools can be
combined (together with other unix tools/scripts/whatever) in a pipeline. Because the
textual input processing (parsing) is quite expensive, I would like to avoid it in case
two or more tools from the toolset are one right after another in the pipeline and use
binary streams instead (to store directly in a memory struct, w/o useless "extra" parsing). Is it
possible to know (using some trick, inter-process communication, or whatever else) if
the tool "before" or "after" any tool in a pipeline is part of the toolset? I guess the
unix env. is not prepared for such sort of "signalling" (AFAIK). Thanks for your ideas...
|
Identifying programs "before" and "after" program in a pipeline are from the same "toolset"
|
This will be the default behaviour if you are using sc:fld to render field values. This is legacy behaviour left over from Sitecore 5, which did not replace the GUIDs in item links. If you want to use Sitecore 6's new functionality, you must use sc:field instead.
|
We're having issues inserting links into rich text in Sitecore 6.1.0. When a link to a Sitecore item is inserted, it is output as:

http://domain/~/link.aspx?_id=8A035DC067A64E2CBBE2662F6DB53BC5&_z=z

rather than the actual resolved URL:

http://domain/path/to/page.aspx

This article confirms that this should be resolved in the render pipeline: "in Sitecore 6 it inserts a specially formatted link that contains the Guid of the item you want to link to, then when the item is rendered the special link is replaced with the actual link to the item".

The pipeline has the ShortenLinks method added in web.config:

<convertToRuntimeHtml>
<processor type="Sitecore.Pipelines.ConvertToRuntimeHtml.PrepareHtml, Sitecore.Kernel"/>
<processor type="Sitecore.Pipelines.ConvertToRuntimeHtml.ShortenLinks, Sitecore.Kernel"/>
<processor type="Sitecore.Pipelines.ConvertToRuntimeHtml.SetImageSizes, Sitecore.Kernel"/>
<processor type="Sitecore.Pipelines.ConvertToRuntimeHtml.ConvertWebControls, Sitecore.Kernel"/>
<processor type="Sitecore.Pipelines.ConvertToRuntimeHtml.FixBullets, Sitecore.Kernel"/>
<processor type="Sitecore.Pipelines.ConvertToRuntimeHtml.FinalizeHtml, Sitecore.Kernel"/>
</convertToRuntimeHtml>

So I really can't see why links are still rendering in ID format rather than as full SEO-tastic URLs. Anyone got any clues? Thanks, Adam
|
Sitecore not resolving rich text editor URLS in page renders
|
Solution 1

The pipeline function from stream or readable-stream expects a callback as its last parameter:

var gulp = require('gulp');
var uglify = require('gulp-uglify');
var pipeline = require('readable-stream').pipeline;
gulp.task('compress', function (cb) {
return pipeline(
gulp.src('DIR_NAME/*.js'),
uglify(),
gulp.dest('DIR_NAME/dist'),
cb
);
});

Solution 2

stream/promises exposes a promise-based version of pipeline which does not use a callback:

var gulp = require('gulp');
var uglify = require('gulp-uglify');
var pipeline = require('stream/promises').pipeline;
gulp.task('compress', async function () {
await pipeline(
gulp.src('DIR_NAME/*.js'),
uglify(),
gulp.dest('DIR_NAME/dist')
);
});

Solution 3

Then there is the traditional way of piping streams, with the .pipe() method:

var gulp = require('gulp');
var uglify = require('gulp-uglify');
gulp.task('compress', function () {
return gulp.src('DIR_NAME/*.js')
.pipe(uglify())
.pipe(gulp.dest('DIR_NAME/dist'));
});
|
I got an error, TypeError: The "streams[stream.length - 1]" property must be of type function. Received an instance of Pumpify, while trying to minify JavaScript using the gulp package:

Using gulpfile MY_PROJECT_PATH\gulpfile.js
Starting 'compress'...
'compress' errored after 21 ms
TypeError: The "streams[stream.length - 1]" property must be of type function. Received an instance of Pumpify
at popCallback (MY_PROJECT_PATH\node_modules\readable-stream\lib\internal\streams\pipeline.js:59:3)
at pipeline (MY_PROJECT_PATH\node_modules\readable-stream\lib\internal\streams\pipeline.js:134:37

This is my code in gulpfile.js:

var gulp = require('gulp');
var uglify = require('gulp-uglify');
var pipeline = require('readable-stream').pipeline;
gulp.task('compress', function () {
return pipeline(
gulp.src('DIR_NAME/*.js'),
uglify(),
gulp.dest('DIR_NAME/dist')
);
});

The package.json file (I tried installing pipeline, readable-stream, and pumpify while debugging):

{
"devDependencies": {
"gulp": "^4.0.2",
"gulp-uglify": "^3.0.2",
"pipeline": "^0.1.3",
"pumpify": "^2.0.1",
"readable-stream": "^4.3.0"
}
}
|
gulp generates TypeError: The Streams property must be of type function
|
grep ... |
perl -e'
defined($line = <>) && !defined(<>)
or die("Wrong number of matches\n");
print $line;
' |
xargs ...

The Perl program outputs a line to STDOUT if and only if there's exactly one line of input. If there isn't exactly one line of input, it outputs nothing to STDOUT and an error message to STDERR. The Perl program reads as little as possible; this means that both perl and grep might finish earlier and thus use less CPU and disk. The line breaks inside and outside of the Perl program can be left in or removed.
|
I want to inject a command into a pipeline that just validates the input at that point, passing it on if it's valid. If it isn't valid, nothing should be passed on, or perhaps a custom-defined error message from an option or something. At the moment I'm using perl for this, as in this example where I check for an expected unique match for $1 in a file:

grep -P '^\s*\Q$(strip $1)\E\s+' file_codes.txt \
| perl -e '@in = <STDIN>;' \
-e '@in == 1 or die "wrong number of matches";' \
-e 'print @in' \
| xargs \
| ...

I don't like this because it seems both un-pipeish and un-perlish, with the explicit read and print involving @in. It seems like there ought to be something like tee that does this, but I didn't find it.
|
command to validate input in a pipeline then pass it on?
|
On a real microprocessor it depends whether you ...

... use a JTAG debugger (meaning some external hardware stops the CPU when the breakpoint is reached). In this case it depends on the CPU what happens if you change lines "2" or "3" while the CPU is stopped at a breakpoint. There may be CPUs that read the memory again after continuing from a breakpoint, and other CPUs that don't. I don't know how PowerPC microcontrollers (like the MPC57xx) behave. However, I would guess that in most microcontrollers the hardware is (intentionally) designed so that the pipeline does not work "normally" in the case of a breakpoint: after reaching the breakpoint, "lines 1" to "3" are re-read from memory.

... or you are performing on-chip debugging (meaning the debugging software runs on the same CPU as the software being debugged). In this case some exception is entered when the breakpoint is hit. In the case of a PowerPC, the exception returns using an rfdi or (on older controllers) rfci instruction. This means that the debugger uses the rfdi instruction to continue the program being debugged. rfdi, rfi and rfci are jump/branch instructions, and after a jump/branch the CPU has to refill the pipeline anyway. This means that the CPU will definitely read even "line 1" from memory again, so you can even modify "line 1" in the breakpoint.
|
In RISC pipelining, instructions consist of 5 steps. I have a question about whether pipelining could affect setting breakpoints.

Example: assume that the binary below is running and $pc is at line 1:

line1: lwz r11 8(r31) <= PC @ here
line2: lwz r0, 0(r31)
line3: cmpwi cr7, r10, 0
line4: lwz r9, 4(r31)
line5: stw r11, 0xA0(r1)

Pipeline state (my guess): as far as I know, PPC instructions have 5 stages: fetch, decode, execute, memory access, write back. At this moment, I guess the pipeline would look like below. Is that correct?

line1: lwz r11 8(r31) <= execute (because PC is here)
line2: lwz r0, 0(r31) <= decode
line3: cmpwi cr7, r10, 0 <= fetch
line4: lwz r9, 4(r31)
line5: stw r11, 0xA0(r1)

Questions: Is the state that I wrote correct? At this moment, is it not allowed for a debugger to change the instruction in line 3 at runtime (such as setting a breakpoint at line 3)?
|
RISC pipeline effect on setting breakpoints?
|
I could use extendedChoice as below:

agent any
stages {
stage("Release scope") {
steps {
script {
// This list is going to come from a file, and is going to be big.
// for example purpose, I am creating a file with 3 items in it.
sh "echo \"first\nsecond\nthird\" > ${WORKSPACE}/list"
// Load the list into a variable
env.LIST = readFile("${WORKSPACE}/list").replaceAll(~/\n/, ",")
env.RELEASE_SCOPE = input message: 'User input required', ok: 'Release!',
parameters: [extendedChoice(
name: 'ArchitecturesCh',
defaultValue: "${env.BUILD_ARCHS}",
multiSelectDelimiter: ',',
type: 'PT_CHECKBOX',
value: env.LIST
)]
// Show the select input
env.RELEASE_SCOPE = input message: 'User input required', ok: 'Release!',
parameters: [choice(name: 'CHOOSE_RELEASE', choices: env.LIST, description: 'What are the choices?')]
}
echo "Release scope selected: ${env.RELEASE_SCOPE}"
}
}
}
}
|
I found out how to create input parameters dynamically from this SO answer:

agent any
stages {
stage("Release scope") {
steps {
script {
// This list is going to come from a file, and is going to be big.
// for example purpose, I am creating a file with 3 items in it.
sh "echo \"first\nsecond\nthird\" > ${WORKSPACE}/list"
// Load the list into a variable
env.LIST = readFile (file: "${WORKSPACE}/list")
// Show the select input
env.RELEASE_SCOPE = input message: 'User input required', ok: 'Release!',
parameters: [choice(name: 'CHOOSE_RELEASE', choices: env.LIST, description: 'What are the choices?')]
}
echo "Release scope selected: ${env.RELEASE_SCOPE}"
}
}
}
}

This allows us to choose only one option because it's a choice parameter. How can I use the same list to create a checkbox parameter, so the user can choose more than one as needed? E.g. if the user chooses first and third, then the last echo should print:

Release scope selected: first,third

or the following is fine too, so I can iterate over it and find the true ones:

Release scope selected: {first: true, second: false, third: true}
|
How to create dynamic checkbox parameter in Jenkins pipeline?
|
Here is a .gitlab-ci.yml file that collects artifacts into a final artifact (it takes the files generated by earlier stages and puts them all together). The key is the needs attribute, which takes the artifacts from the earlier jobs (with artifacts: true).

stages:
  - stage_one
  - stage_two
  - generate_content

apple:
  stage: stage_one
  script: echo apple > apple.txt
  artifacts:
    paths:
      - apple.txt

banana:
  stage: stage_two
  script: echo banana > banana.txt
  artifacts:
    paths:
      - banana.txt

put_it_all_together:
  stage: generate_content
  needs:
    - job: apple
      artifacts: true
    - job: banana
      artifacts: true
  script:
    - cat apple.txt banana.txt > fruit.txt
  artifacts:
    paths:
      - fruit.txt
|
I have 3 stages in my pipeline, and each job in all 3 stages creates XML data files. These jobs run in parallel. I want to merge all the XML data files in a 4th stage. Below is my YAML code:

stages:
  - deploy
  - test
  - execute
  - artifact

script:
  - XYZ
artifacts:
  name: datafile.xml
  paths:
    - data/

Problem: how can I collect all the XMLs from previous jobs to merge them? File names are unique.
|
Get artifacts from previous GIT jobs
|
Because head finished, a SIGPIPE signal is sent to tar the next time it writes, causing it to stop. You need to buffer the stdout until tar has finished running, e.g. using sponge from moreutils:

filename_2=$(tar zxvf ${filename} | sponge | head -1)

If you don't have sponge, tail with a high value also generally works:

filename_2=$(tar zxvf ${filename} | tail -n 10000000000 | head -1)
|
I am working on a script that needs to decompress a file and then switch to the first folder decompressed, using the cd command. What I do is the following:

filename_2=$(tar zxvf ${filename} | head -1)
cd $filename_2

It works as expected, but it doesn't decompress all of the files from the tar.gz file, and I'm not sure why, because if I do:

filename_2=$(tar zxvf ${filename})

it will decompress everything fine, but then I'm not sure how to access the first folder resulting from the decompression. I do not understand how a | pipeline has an effect on a previous command. What am I doing wrong? Thanks.
|
Combination of tar and head -1 not working as expected
|
Use the below:

def exists = fileExists '/root/elp/test.php'
if (exists) {
sh "LINUX SHELL COMMAND"
} else {
println "File doesn't exist"
}

You can follow check-if-a-file-exists-in-jenkins-pipeline. And you can also use the below:

def exitCode = sh script: 'find -name "*.zip" | egrep .', returnStatus: true
boolean exists = exitCode == 0
|
Hi everybody,
I have a problem with a stage in a declarative pipeline in Jenkins. I'd like Jenkins to check whether the directory /root/elp contains files matching *.php. If so, Jenkins should execute a command. If there is nothing in the folder, Jenkins should finish the job with success. My code that does not work:

stage('Test Stage') {
steps {
script {
def folder = new File( '/root/elp/test.php' )
if( folder.exists() ) {
sh "LINUX SHELL COMMAND"
} else {
println "File doesn't exist"
}
}
}
|
Jenkins Pipeline file exists if else
|
There is nothing like disabling a processor in Sitecore out of the box. What you can do is create a patch config which will remove that processor. But be aware that this processor will never be executed unless you change the configuration again and the application is restarted. Below is an example of how to remove the RunQueries processor from the contentSearch.queryWarmup pipeline:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
<sitecore>
<pipelines>
<contentSearch.queryWarmup>
<processor type="Sitecore.ContentSearch.Pipelines.QueryWarmups.RunQueries, Sitecore.ContentSearch">
<patch:delete />
</processor>
</contentSearch.queryWarmup>
</pipelines>
</sitecore>

IMPORTANT: Remember that Sitecore parses all the config files alphabetically, and then subfolders (alphabetically again). So your patch file must be applied "after" the original config which adds the processor. You may want to put all your patches in e.g. App_Config/ZZ.Custom/my.patch.config.
|
I need to patch Sitecore's pipeline to disable one of the processors.
Can I do that, or should I remove and re-implement the whole pipeline?
|
Sitecore. Disable a pipeline's processor
|
The short answer: No.

The long answer: incrementing operators are easy to make:

%+=% <- function (x, inc) x + inc
%-=% <- function (x, dec) x - dec

However, these functions don't modify x directly, because R tries very hard to prevent functions from modifying variables outside their scope. That is, you still need to write x <- x %+=% 1 to actually update x.

The inc<- and dec<- functions from the package Hmisc work around this restriction. So you might be surprised to find that the definition of inc<- is just:

function (x, value)
{
    x + value
}

That is, the code inside the function is exactly the same as in our custom %+=% operator. The magic happens because of a special feature in the R parser, which interprets inc(x) <- 1 as x <- `inc<-`(x, 1). This is how you're able to do things like names(iris) <- letters[seq_along(iris)].

%<>% is magical because it modifies x outside its scope. It is also very much against R's coding paradigm*. For that reason, the machinery required is complicated. To implement %+=% the way you'd like, you would need to reverse-engineer %<>%. Might be worth raising as a feature request on their GitHub, though.

*Unless you're a data.table user, in which case there's no hope for you anyway.
|
library(magrittr)
x <- 2
x %<>%
add(3) %>%
subtract(1)
x

Is there a predefined, more readable way that works with pipes? Perhaps something like:

x %+=% 3 %-=% 1
|
elegant increment operator as pipeline
|
This isn't guaranteed to be side-effect-free, but it's probably a sane first cut:

reverse_command() {
# Check the number of entries in the BASH_SOURCE array to ensure that it's empty
# ...(meaning an interactive command).
if (( ${#BASH_SOURCE[@]} <= 1 )); then
# For an interactive command, take its text, tack on `| tac`, and evaluate
eval "${BASH_COMMAND} | tac"
# ...then return false to suppress the non-reversed version.
false
else
# for a noninteractive command, return true to run the original unmodified
true
fi
}
# turn on extended DEBUG hook behavior (necessary to suppress original commands).
shopt -s extdebug
# install our trap
trap reverse_command DEBUG
|
The inspiration here is a prank idea, so try to look past the fact that it's not really useful... Let's say I wanted to set up an alias in bash that would subtly change any command entered at the prompt into the same command, but ultimately piped through tac to reverse the final output. A few examples of what I'd try to do:

ls ---> ls | tac
ls -la ---> ls -la | tac
tail ./foo | grep 'bar' ---> tail ./foo | grep 'bar' | tac

Is there a way to set up an alias, or some other means, that will append | tac to the end of each/every command entered, without further intervention? Extra consideration given to ideas that are easy to hide in a bashrc. ;)
|
Can I create a "wildcard" bash alias that alters any command?
|
Each HGETALL returns its own series of values, which need to be converted to strings, and the pipeline is returning a series of those. Use the generic redis.Values to break down this outer structure first, then you can parse the inner slices.

// Execute the Pipeline
pipe_prox, err := redis.Values(client.Do("EXEC"))
if err != nil {
panic(err)
}
for _, v := range pipe_prox {
s, err := redis.Strings(v, nil)
if err != nil {
fmt.Println("Not a bulk strings repsonse", err)
}
fmt.Println(s)
}

prints:

[title hi]
[title hello]
|
I managed to pipeline multiple HGETALL commands, but I can't manage to convert the results to strings. My sample code is this:

// Initialize Redis (Redigo) client on port 6379
// and default address 127.0.0.1/localhost
client, err := redis.Dial("tcp", ":6379")
if err != nil {
panic(err)
}
defer client.Close()
// Initialize Pipeline
client.Send("MULTI")
// Send writes the command to the connection's output buffer
client.Send("HGETALL", "post:1") // Where "post:1" contains " title 'hi' "
client.Send("HGETALL", "post:2") // Where "post:1" contains " title 'hello' "
// Execute the Pipeline
pipe_prox, err := client.Do("EXEC")
if err != nil {
panic(err)
}
log.Println(pipe_prox)

It is fine as long as you're comfortable with non-string results. What I'm getting is this:

[[[116 105 116 108 101] [104 105]] [[116 105 116 108 101] [104 101 108 108 111]]]

But what I need is:

"title" "hi" "title" "hello"

I've tried the following, and other combinations as well:

result, _ := redis.Strings(pipe_prox, err)
log.Println(pipe_prox)

But all I get is:

[]

I should note that it works with multiple HGET key value commands, but that's not what I need. What am I doing wrong? How should I convert the "numerical map" to strings? Thanks for any help.
|
Convert Redigo Pipeline result to strings
|
Yes, MPI allows a process to send data to itself, but one has to be extra careful about possible deadlocks when blocking operations are used. In that case one usually pairs a non-blocking send with a blocking receive, or vice versa, or one uses calls like MPI_Sendrecv. Sending a message to self usually ends up with the message simply being memory-copied from the source buffer to the destination one, with no networking or other heavy machinery involved.

And no, communicating with self is not necessarily a bad thing. The most obvious benefit is that it makes the code more symmetric, as it removes/reduces the special logic needed to handle self-interaction. Sending to/receiving from self also happens in most collective communication calls. For example, MPI_Scatter also sends part of the data to the root process. To prevent some send-to-self cases that unnecessarily replicate data and decrease performance, MPI allows in-place mode (MPI_IN_PLACE) for most communication-related collectives.
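As an illustration only, here is a minimal mpi4py sketch (Python, rather than the C MPI API the question uses); the combined send/receive cannot deadlock the way a plain blocking self-send could:

# Run with: mpiexec -n 2 python self_send.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each process sends a small object to itself and receives it back in one call.
received = comm.sendrecv(f"x computed by rank {rank}", dest=rank, source=rank)
print(rank, received)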
|
I have an upper triangular matrix A and the result vector b. My program needs to solve the linear system Ax = b using the pipeline method. One of the constraints is that the number of processes is smaller than the number of equations (let's say it can be from 2 to numberOfEquations-1).

I don't have the code right now; I'm thinking about the pseudocode. My idea was that one of the processes will create the random upper triangular matrix (A) and
the vector b.
Let's say this is the random matrix:

1 2 3 4 5 6
0 1 7 8 9 10
0 0 1 12 13 14
0 0 0 1 16 17
0 0 0 0 1 18
0 0 0 0 0 1

and the vector b is [10 5 8 9 10 5], and I have a smaller number of processes than equations (let's say 2 processes). What I thought is that some process will send each process a line from the matrix and the relevant number from vector b. So the last line of the matrix and the last number in vector b will be sent to process[numProcs-1] (here I mean the last process, process 1), which then computes its X and sends the result to process 0.

Now process 0 needs to compute the 5th line of the matrix, and here I'm stuck: I have the X that was computed by process 1, but how can the process send to itself the next line of the matrix and the relevant number from vector b that needs to be computed? Is it possible? I don't think it's right to send to "myself".
|
Can a process send data to itself? Using MPICH2
|
The correct way to do this is to use if/else:

if [ -e /usr/bin/logger ]
then
application param1 2>&1 | /usr/bin/logger > /dev/null &
else
application param1 > /dev/null 2>&1 &
fi

Edit: in the case of a complex construct, you should use a function:

foo () {
if [ ... ]
then
do_something
else
something_else
fi
while [ ... ]
do
loop_stuff
done
etc.
}

Then your log/no-log if stays simple:

if [ -e /usr/bin/logger ]
then
foo 2>&1 | /usr/bin/logger > /dev/null &
else
foo > /dev/null 2>&1 &
fi
|
In my bash script I need to check whether the logger binary exists. If so, I pipe the application output to it.

Edit: It needs to be piping; the application should run permanently.

I tried to put the pipeline stuff in a variable and use it later. Something like:

if [ -e /usr/bin/logger ]; then
OUT=| /usr/bin/logger
fi
application param1 2>&1 $OUT > /dev/null &

but it doesn't work; the output is not piped to the logger. If I put the pipeline stuff directly into the application start line, it works. Unfortunately the real script becomes too complicated if I use command lines with and without the logger stuff in if/else statements - the reason is that I already have if/else there, and adding new ones would double the number of cases.

Simple test application:

TMP=| wc -m
echo aas bdsd vasd $TMP

gives:

$ ./test.sh
0
aas bdsd vasd

It seems that somehow command1 and command2 are executed separately. I managed to solve the problem (in both the test and real scripts) using eval and putting the conditional stuff in double quotes:

TMP="| wc -m"
eval echo aas bdsd vasd $TMP

$ ./test.sh
14

It feels like a workaround. What is the right way to do it?
|
Pipeline metacharacter in variable in bash
|
I think you mean STDOUT. Is Allen Bauer's answer what you are looking for?
|
I need to write a FILE to STDIN. This FILE is going to be accessed by another EXE that is going to write the STDIN stream to a microcontroller. Could you give me some help on how to write the file to STDIN using Delphi 2010? Thanks very much!
|
How to write a file to the STDIN stream using Delphi?
|
Looking at your comment, are you trying to do this?

Assembly.GetExecutingAssembly().GetFiles()
|> Seq.map (fun file ->
let stream = new StreamReader(file)
file, stream.ReadToEnd().Contains(keyword))
|> Seq.filter snd
|> Seq.map fst
|> Seq.iter (fun file -> printfn "%s" file.Name)

Imperative style can be cleaner:

for file in Assembly.GetExecutingAssembly().GetFiles() do
let stream = new StreamReader(file)
if stream.ReadToEnd().Contains(keyword) then
printfn "%s" file.Name
|
Now I can add single values or tuples to the pipeline; my next question is, can I add a list/array?

filesUnderFolder
|> Seq.map FileInfo

My problem is I can't process this with the pipeline |>, or can I?

Assembly.GetExecutingAssembly.GetFiles()
|>Array.map (fun file -> file.ReadAllText().Contains(keyword))
|
Piping a list into the line in F#
|
GitLab side: create a Trigger Token, then follow the "trigger a pipeline" howto. Example:

curl --request POST \
--form token=<token> \
--form ref=<ref_name> \
"https://gitlab.example.com/api/v4/projects/<project_id>/trigger/pipeline"OR on a specific ref_name(with a branch or tag name, like main):curl --request POST "https://gitlab.example.com/api/v4/projects/<project_id>/trigger/pipeline?token=<token>&ref=<ref_name>"RhodeCode side:Your version should be at least4.17.3people saw problems between https and ssh push behaviour before this versionRhodeCode has documentation for their webhook system,
also this page mentionsStarting from 4.8.0 also repository extra fields can be used.
A format to use them is ${extra:field_key}.
It’s usefull to use them to specify custom repo only parameters.
Some of the variables like ${pull_request_id} will be replaced only in the pull-request-related events.

So check whether you really send the same fields. With debug logging enabled, the log should have some lines containing DEBUG [rhodecode.integrations.types.webhook] Debugging Webhook. Try to use a testing endpoint (e.g. pustreq.com) to see if your outgoing webhook sends what you want.

Regards
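For reference, a hypothetical Python sketch of the same GitLab trigger call shown above (assuming the requests library and placeholder project/token values):

import requests

# Placeholders: substitute your real GitLab host, project id, trigger token and ref.
url = "https://gitlab.example.com/api/v4/projects/<project_id>/trigger/pipeline"
resp = requests.post(url, data={"token": "<token>", "ref": "<ref_name>"})
resp.raise_for_status()
print(resp.json())  # details of the pipeline that was created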
|
I'm running a local gitlab server (omnibus) and I have a company internal repo (set up through Rhodecode). I'm trying to get Gitlab to trigger a pipeline upon receiving a webhook from our Repo once there is a push done to our repo. Then have a python script parse the JSON data from the webhook, and then depending on the message in the commit, trigger a test or not. I'm doing this all on VM's on our local server. The Gitlab is on a linux machine.
I've hunted for information on Gitlab receiving a webhook and triggering a pipeline event, but haven't got anywhere with it yet. I've tried the token generation, and checking the push webhook post on linux to see if I'm receiving any data, to no avail. Today I'm going to be comparing log files to see if it's logging the webhook presence and then go from there.
|
Executing a Gitlab pipeline from receiving a webhook
|
The new DSL2 lets you define modules which can contain workflow components (i.e. functions, processes and workflows) that can be imported into another Nextflow script using the include keyword. The module inclusion example in the docs has a typo, but it should look like:

include { foo } from './some/module.nf'
workflow {
data = channel.fromPath('/some/data/*.txt')
foo(data)
}

The above snippet includes a process named foo, defined in the module script, in the main execution context. This way, foo can be invoked in the workflow scope. Nextflow implicitly looks for the script file ./some/module.nf, resolving the path against the including script's location.
|
Instead of having one huge Nextflow script that runs all the pipelines, and to make the file easier to read and the pipeline easier to edit down the road: can we write a Nextflow script that executes multiple Nextflow scripts?
|
Can we write a Nextflow script that executes Multiple Nextflow scripts?
|
The syntax difference here is literal strings versus interpolated strings in Groovy, versus shell interpreters within shell step methods.

": interpolated string in Groovy
': literal string in Groovy, and interpolated string in the shell interpreter

Essentially, a Groovy variable is interpolated within " in the pipeline execution, and an environment variable is interpolated within " in the pipeline execution and within ' in the shell interpreter (within the pipeline it must also be accessed through the env object, but it is a first-class variable expression in the shell step method). Therefore, we can fix the assigned value of env.intern with:

env.intern = "TEST = ${env.test}"

where the assigned value of env.test will be interpolated within the Groovy string and assigned to the env pipeline object at the intern key. This will then also be accessible to the shell interpreter within shell step methods, and the rest of your pipeline is already correct and will behave as expected.
|
I'm using the Jenkins scripted pipeline and having trouble understanding how to nest environment variables within each other. Here is an MWE:

// FROM https://jenkins.io/doc/pipeline/examples/#parallel-multiple-nodes
def labels = []
if (HOST == 'true') {
labels.add(<HOSTNAME>)
}
def builders = [:]
for (x in labels) {
def label = x
builders[label] = {
ansiColor('xterm') {
node(label) {
stage('cleanup') {
deleteDir()
}
stage('build') {
env.test = "TESTA"
env.intern = '''
TEST = "${env.test}"
'''
sh '''
echo $intern
printenv
'''
}
}
}
}
}
parallel builders

The idea here is that env.test contains the value TESTA, which sets env.intern to TEST = TESTA; this is what I want to happen. After this, the code just prints out the values. Sadly, the result is TEST = "${env.test}". How can I use nested string environment variables in a Jenkins scripted pipeline?
|
Jenkins scripted pipeline nested environment variable
|
"why echo behaves differently than echo "Hi"|cat when used in a bash script"

In the case of echo >&3, when the pipe connected to the 3rd file descriptor is closed, the bash process itself is killed by SIGPIPE. This is mostly because echo is a builtin command - it is executed by bash itself without any fork()ing.

In the case of echo | cat >&3, when the pipe connected to the 3rd file descriptor is closed, the child process cat is killed by SIGPIPE, in which case the parent process continues to live, so Bash can handle the exit status.

Compare:

strace -ff bash -c 'mkfifo ii; cat <ii >/dev/null & pid=$! ; exec 3>ii ; kill $pid ; rm ii ; echo something >&3'

vs.

...; /bin/echo something >&3'
|
I'm trying to understand why echo behaves differently than echo "Hi"|cat when used in a bash script with a broken pipe.

Behaviour:
- echo: immediately terminates the script
- echo "Hi"|cat: the pipeline terminates but the script continues

Sample reproducing steps. script_with_echo.sh:

#!/bin/bash
mkfifo ii
cat<ii >/dev/null & echo "KILL : $!"
exec 3>ii
rm ii
while true; do read LINE; echo "$LINE" >&3; echo "$?">/some/external/file;done

script_with_echo_cat.sh:

#!/bin/bash
mkfifo ii
cat<ii >/dev/null & echo "KILL : $!"
exec 3>ii
rm ii
while true; do read LINE; echo "$LINE"|cat>&3; echo "$?">/some/external/file;done

1. Run a script on terminal 1 (note KILL : <PID>).
2. Input some sample lines on terminal 1 and verify that 0 (success exit code) is being written to /some/external/file.
3. Run kill -9 <PID> from terminal 2.
4. Input another sample line on terminal 1.

If script_with_echo.sh was being executed in step 1, the script immediately terminates (and no error code is written to /some/external/file). If script_with_echo_cat.sh was being executed in step 1, the script continues normally (and for every subsequent sample line input, error code 141 (SIGPIPE) is written to /some/external/file, which is expected). Why does this different behavior arise?
|
Different behavior with echo vs echo|cat when reading to / writing from FIFOs (pipes)
|
OneHotEncoder creates a sparse matrix on transform by default. From there the error message is pretty straightforward: you can try TruncatedSVD instead of PCA. However, you could also set sparse=False in the encoder if you want to stick to PCA.

That said, do you really want to one-hot encode every feature, and then scale those dummy variables? Consider using a ColumnTransformer if you'd like to encode some features and scale others.
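For instance, a minimal sketch along those lines (the column names are made up, so adjust them to the actual customer_data_raw columns; newer sklearn versions spell the encoder flag sparse_output instead of sparse):

from sklearn.compose import ColumnTransformer
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical column split - replace with the real categorical/numeric columns.
categorical_cols = ["country", "segment"]
numeric_cols = ["age", "income"]

preprocess = ColumnTransformer([
    ("cat", OneHotEncoder(drop="first", sparse=False), categorical_cols),  # sparse_output=False on sklearn >= 1.2
    ("num", StandardScaler(), numeric_cols),
])

pca_pipe = make_pipeline(preprocess, PCA())
# components = pca_pipe.fit_transform(customer_data_raw)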
|
I'm working on some customer_data where, as a first step, I want to do PCA, followed by clustering as a second step. Since encoding (and scaling) needs to be done before feeding the data to the PCA, I thought it would be good to fit it all into a pipeline - which unfortunately doesn't seem to work. How can I create this pipeline, and does it even make sense to do it like this?

# Creating pipeline objects
encoder = OneHotEncoder(drop='first')
scaler = StandardScaler(with_mean=False)
pca = PCA()
# Create pipeline
pca_pipe = make_pipeline(encoder,
scaler,
pca)
# Fit data to pipeline
pca_pipe.fit_transform(customer_data_raw)

I get the following error message:

---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-27-c4ce88042a66> in <module>()
20
21 # Fit data to pipeline
---> 22 pca_pipe.fit_transform(customer_data_raw)
2 frames
/usr/local/lib/python3.7/dist-packages/sklearn/decomposition/_pca.py in _fit(self, X)
385 # This is more informative than the generic one raised by check_array.
386 if issparse(X):
--> 387 raise TypeError('PCA does not support sparse input. See '
388 'TruncatedSVD for a possible alternative.')
389
TypeError: PCA does not support sparse input. See TruncatedSVD for a possible alternative.
|
How to create a pipeline for PCA?
|
We need to specify each tool on a new line instead:

pipeline {
agent any
tools {
nodejs "node12.14.1"
terraform "terraform-v0.12.19"
}
...
|
I need to use the nodejs as well as terraform tools in build stages. The declarative pipeline I used is:

pipeline{
agent any
tools { nodejs "node12.14.1" terraform "terraform-v0.12.19"}
...

Only the nodejs tool can be used; terraform is not installed and gives a "command not found" error.
|
How to use multiple tools in Jenkins Pipeline
|
A typical problem with combineLatest is that it requires all source Observables to emit at least once, so if you use filter to discard a source's only value, then combineLatest will never emit anything. An easy solution is to make sure each source always emits, with defaultIfEmpty:

combineLatest(
names.map((name, i) => {
return userData$[i].pipe(
map(({name, age})=> { return { name, age: age * 2} }),
filter(({age}) => age < 66),
map(({name, age})=> { return { name: name.toLocaleUpperCase(), age} }),
defaultIfEmpty(null),
)
})
)

Live demo: https://stackblitz.com/edit/typescript-rsffbs?file=index.ts

If your real use case uses a source Observable other than of() that doesn't complete immediately, you might want to use startWith instead.
|
This is a simplification of a complex case, where some observables in an array could be filtered out if their values are not valid. The problem is that the filtered observable is not allowing the others to complete the combine. What operator or approach could handle this case, allowing the valid data to be logged in the subscription?

// RxJS v6+
import { fromEvent, combineLatest, of } from 'rxjs';
import { mapTo, startWith, scan, tap, map, filter } from 'rxjs/operators';
const userData$ = [
of({ name: 'Joseph', age: 23}),
of({ name: 'Mario', age: 33}),
of({ name: 'Robert', age: 24}),
of({ name: 'Alonso', age: 25})
];
const names = ['Joseph', 'Mario', 'Robert', 'Alonso'];
combineLatest(
names.map((name, i) => {
return userData$[i].pipe(
map(({name, age})=> { return{ name, age: age * 2} }),
filter(({age}) => age < 67),
map(({name, age})=> { return{ name: name.toLocaleUpperCase(), age} }),
)
})
)
.pipe(
tap(console.log),
)
.subscribe();

Sample in StackBlitz. If we change the value to 67, all the observables will show the data.
|
How to use combineLatest with filter in certain observable?
|
Problem: schedule_interval=None

In order to initiate multiple runs within your defined date range, you need to set the schedule interval for the DAG. For example, try:

schedule_interval='@daily'

The start date, end date and schedule interval define how many runs will be initiated by the scheduler when backfill is executed. See Airflow scheduling and presets.
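Applied to the DAG in the question, a sketch of the relevant change (keeping the question's other arguments; catchup is also worth revisiting, since catchup=False suppresses historical runs created by the scheduler):

from datetime import datetime
from airflow import DAG

args = {
    'owner': 'airflow',
    'depends_on_past': True,
    'start_date': datetime(2018, 7, 25),
}

dag = DAG(
    dag_id='my_dag',
    default_args=args,
    schedule_interval='@daily',  # one scheduled run per day in the backfill range
    catchup=True,
)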
|
I just started using Airflow and I basically want to run my DAG to load historical data. So I'm running this command:

airflow backfill my_dag -s 2018-07-30 -e 2018-08-01

and Airflow is running my DAG only for 2018-07-30. My expectation was for Airflow to run it for 2018-07-30, 2018-07-31 and 2018-08-01.
Here's part of my DAG's code:

import airflow
import configparser
import os
from airflow import DAG
from airflow.contrib.operators.databricks_operator import DatabricksSubmitRunOperator
from airflow.models import Variable
from datetime import datetime
def getConfFileFullPath(fileName):
    return os.path.join(os.path.abspath(os.path.dirname(__file__)), fileName)
config = configparser.ConfigParser(interpolation=configparser.ExtendedInterpolation())
config.read([getConfFileFullPath('pipeline.properties')])
args = {
'owner': 'airflow',
'depends_on_past': True,
'start_date': datetime(2018,7,25),
'end_date':airflow.utils.dates.days_ago(1)
}
dag_id='my_dag'
dag = DAG(
dag_id=dag_id, default_args=args,
schedule_interval=None, catchup=False)
...

So am I doing anything wrong with my DAG configuration?
|
Airflow backfill only scheduling for START_DATE
|
You can try as below with eval... not very safe though; see this for more info:

while read line; do eval "$line" ; done < <(cut -d';' -f1 seeds.txt)
|
I have very little experience with bash, but I am attempting to put together a pipeline that reads in and executes commands line by line from an input file. The input file, called "seeds.txt", is set up like so:

program_to_execute -seed_value 4496341759106 -a_bunch_of_arguments c(1,2,3) ; #Short_description_of_run
program_to_execute -seed_value 7502828106749 -a_bunch_of_arguments c(4,5,6) ; #Short_description_of_run

I separated the #Short_descriptions from the commands with a semi-colon (;) since the arguments contain commas (,). When I run the following script I get a "No such file or directory" error:

#!/bin/bash
in="${1:-seeds.txt}"
in2=$(cut -d';' -f1 "${in}")
while IFS= read -r SAMP
do
$SAMP
done < $in2

I know that seeds.txt is getting read in fine, so I'm not sure why I'm getting a missing file/directory message. Could anyone point me in the right direction?
|
Execute commands from a single column of an input file
|
Yes, you can get user details, including username, from the users API endpoint:

GET /users/:id
#curl example for gitlab.com
curl "https://gitlab.com/api/v4/users/<user id>?access_token=<your gitlab token>"
|
For a project I am running a pipeline that provisions servers and configures them. In the server configuration, the GitLab username needs to be added. I need the username specifically, as it is not based on either the GitLab user id or the user's email (so I cannot just use the gitlab_user_email variable and derive the username from it). Is there any way to fetch a user's username via their GitLab user id?
|
How to get gitlab username based on gitlab user id
|
To me the answer is obvious - the data belongs outside the image. The reason is that if you build an image with the data inside, how are your colleagues going to use it with their data?

It does not make sense to talk about the data being inside or outside the container. The data will be inside the container; the only question is how it got there. My recommended process is something like:

1. Create an image with all your scripts, required tools, dependencies, etc., but not data. For simplicity, let us name this image pipeline.
2. Bind mount the data into the container:

docker container create --mount type=bind,source=/path/to/data/files/on/host,target=/srv/data,readonly=true pipeline

Of course, replace /path/to/data/files/on/host with the appropriate path. You can store your data in one place and your colleagues in another; you make a substitution appropriate for you and they will make one appropriate for them. However, inside the container the data will be at /srv/data, and your scripts can just assume that it will be there.
|
I am very new to containers and I was wondering if there is a "best practice" for the following situation. Let's say I have developed a general pipeline using multiple software tools to analyze next-generation sequencing data (I work in science). I decided to make a container for this pipeline so I can share it easily with colleagues. The container would have the required tools and their dependencies installed, as well as all the scripts to run the pipeline. There would be some wrapper/master script to run the whole pipeline, something like: bash run-pipeline.sh -i input data.txt

My question is: if you are using a container for this purpose, do you need to place your data INSIDE the container, or can you run the pipeline on your data which is placed outside your container? In other words, do you have to place your input data inside the container to then run the pipeline on it? I'm struggling to find a case example. Thanks.
|
Containers with pipelines: should/can you keep your data separate from the container
|
You can get Testcontainers to work with your Bitbucket Pipelines by disabling Ryuk. You also need to add docker as a service in your script as follows:

image: atlassian/default-image:2
pipelines:
  default:
    - step:
        script:
          - export TESTCONTAINERS_RYUK_DISABLED=true
          # Your commands should come after setting the environment variable above
          # ...
          # ...
        services:
          - docker

definitions:
  services:
    docker:
      memory: 2048

Detailed information regarding this is provided here.
I have a project that I am building with Maven. The test case uses Testcontainers to start up an MS SQL Server instance. The pipeline is currently failing, the reason being that the pipeline image I am using is:

image: maven:3.6.0

which is devoid of docker and the SQL Server image. My question is: do I create my own image with java + maven + docker + sqlserver and use that in the pipeline file, or just have commands in the pipeline file to install what I need? I would assume the latter would be the slower option with regard to build time.

Example of a Bitbucket pipeline failure with Testcontainers Ryuk enabled:

2019-09-09 07:21:22.719 WARN 416 --- [containers-ryuk] o.testcontainers.utility.ResourceReaper : Can not connect to Ryuk at localhost:32768
java.net.SocketException: Broken pipe (Write failed)
at java.net.SocketOutputStream.socketWrite0(Native Method) ~[na:1.8.0_222]
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111) ~[na:1.8.0_222]
at java.net.SocketOutputStream.write(SocketOutputStream.java:134) ~[na:1.8.0_222]
at org.testcontainers.utility.ResourceReaper$FilterRegistry.register(ResourceReaper.java:380) ~[testcontainers-1.11.2.jar:na]
|
bitbucket pipeline with docker container
|
That is a feature of the Go template engine, although it's not a new idea: if you have used Unix systems, you can do the same with shell commands (think of e.g. ls | more). It's called "chaining": you specify a sequence of commands, and the output of each is used as the input of the next in the chain.

It's documented at text/template: "A pipeline may be 'chained' by separating a sequence of commands with pipeline characters '|'. In a chained pipeline, the result of each command is passed as the last argument of the following command. The output of the final command in the pipeline is the value of the pipeline."

The Go template engine only allows you to register and call functions and methods with a single return value, or 2 return values of which the second must be of type error (which is checked to tell whether the call is considered successful; non-nil errors terminate the template execution with an error). So you can't chain commands that have multiple return values, and you can't specify tuples to pass multiple values to functions having multiple parameters.

For more about pipelines, see golang template engine pipelines.
|
Inside Hugo templates, I'm aware that you can call a function using function param:

{{ singularize "cats" }}

but in the documentation, I'm seeing you can also do:

{{ "cats" | singularize }}

I've never encountered this way of calling functions (in languages like Ruby/Python). Is this Go-specific, or just Hugo-specific? What is this way of calling a function called? Also, can you do it if you have more than one argument?
|
How is this invocation called?
|
The -Xmx300m -Xss512k -Dfile.encoding=UTF-8 options come from the Java buildpack, which is documented on Heroku's Dev Center page for Java. The RMI options probably come from Heroku Exec and/or the Heroku CLI for Java. If you need to disable these, you can run:

$ heroku config:set HEROKU_DISABLE_JMX="true"
|
I'm developing an application and it successfully runs on Heroku. I use the pipeline feature, so the same code is used in dev, staging and production. While taking a deeper look into the log of the dev app, there is one line that confuses me a bit:

Picked up JAVA_TOOL_OPTIONS: -Xmx300m -Xss512k -Dfile.encoding=UTF-8 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=1098 -Dcom.sun.management.jmxremote.rmi.port=1099 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.local.only=true -Djava.rmi.server.hostname=172.xx.xx.xx -Djava.rmi.server.port=1099

In general I understand that the JVM takes some default parameters from the environment (like memory settings and so on). I ask myself where all these JMX and RMI parameters come from; in my production app they don't appear. Is this something special in the environment of the development stage of the pipeline? I cannot find any documentation for it.

App configuration: Java buildpack, current heroku-18 stack, 1 hobby web dyno. I don't have any config vars with the name JAVA_TOOL_OPTIONS, so where do the additional arguments come from?
|
Picked up JAVA_TOOL_OPTIONS in Heroku contain RMI parameters
|
You would do something like

vect = transformer_1()  # or however transformer_1 is meant to be constructed
vX = vect.fit_transform(Xtrain)

(or whichever way is appropriate to apply transformer_1), and THEN

pipln1 = Pipeline([("trsfm2", transformer_2),
                   ("estmtr1", estimator_1)])
pipln2 = Pipeline([("trsfm3", transformer_3),
                   ("estmtr2", estimator_2)])

and then apply the two Pipelines on vX.
|
Suppose I have two pipelines:pipln1 = Pipeline([("trsfm1",transformer_1),
("trsfm2",transformer_2),
("estmtr1",estimator_1)])
pipln2 = Pipeline([("trsfm1",transformer_1),
("trsfm3",transformer_3),
("estmtr2",estimator_2)])The two linear pipelines share the same step,trsfm1.Is it possible to avoid calculatingtrsfm1for twice?
|
Does sklearn.pipeline support branching?
|
You cannot get it from the Pipeline object itself. Instead, preserve the fitted model

model = pipeline.fit(training)

and use it to transform data:

training_transformed = model.transform(training)
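Applied to the example in the question, a minimal sketch (my own, not from the original answer) looks like this:

from pyspark.ml import Pipeline
from pyspark.ml.feature import HashingTF, Tokenizer

tokenizer = Tokenizer(inputCol="text", outputCol="words")
hashingTF = HashingTF(inputCol=tokenizer.getOutputCol(), outputCol="features")
pipeline = Pipeline(stages=[tokenizer, hashingTF])

# fit() returns a PipelineModel; keep it instead of discarding the result.
model = pipeline.fit(training)            # `training` is the DataFrame from the question

# transform() applies every fitted stage and returns the transformed DataFrame,
# including the intermediate "words" column and the final "features" column.
training_transformed = model.transform(training)
training_transformed.show(truncate=False)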
|
I am new to Spark ML. I am trying to make use of Spark ML Pipelines in order to chain data transformations (think of it as an ETL process). In other words, I would like to input a DataFrame, do a series of transformations (each time adding a column to this DataFrame) and output the transformed DataFrame. I was looking into the documentation and the code for Pipelines in Python, but I did not get how you can get the transformed dataset out of the Pipeline. See the following example (copied from the documentation and modified):

from pyspark.ml import Pipeline
from pyspark.ml.feature import HashingTF, Tokenizer
# Prepare training documents from a list of (id, text, label) tuples.
training = spark.createDataFrame([
(0, "a b c d e spark", 1.0),
(1, "b d", 0.0),
(2, "spark f g h", 1.0),
(3, "hadoop mapreduce", 0.0)
], ["id", "text", "label"])
# Configure an ML pipeline, which consists of two stages: tokenizer and hashingTF.
tokenizer = Tokenizer(inputCol="text", outputCol="words")
hashingTF = HashingTF(inputCol=tokenizer.getOutputCol(),
outputCol="features")
pipeline = Pipeline(stages=[tokenizer, hashingTF])
training.show()
pipeline.fit(training)

How can I get the transformed dataset (i.e. the dataset after the tokenization and hashing have been carried out) from the "pipeline" object?
|
Retrieving transformed dataset from pipeline object in pyspark.ml.pipeline
|
smessage is not a method. It is a variable of type std::string which is being created using the overloaded constructor that takes a pointer and a size. By the way, you can use the zmq::message_t::str() function directly to get an std::string. For example:

zmq::message_t msg;
// read some data...
std::string smessage = msg.str();
|
Below is the code in ZeroMQ that uses the smessage method. I searched for its definition in the zhelpers.hpp header file, but it is not present there.

#include "zhelpers.hpp"
#include <string>
int main (int argc, char *argv[])
{
// Process tasks forever
while (1) {
zmq::message_t message;
int workload; // Workload in msecs
receiver.recv(&message);
std::string smessage(static_cast<char*>(message.data()), message.size());
std::istringstream iss(smessage);
iss >> workload;
// Do the work
s_sleep(workload);
// Send results to sink
message.rebuild();
sender.send(message);
// Simple progress indicator for the viewer
std::cout << "." << std::flush;
}
return 0;
}
|
In which header file is the method smessage() defined in ZeroMQ?
|
Here is the context:

The Cortex-A7 is an in-order, partial dual-issue machine. The dual integer pipelines are eight stages long; the Cortex-A7 combines full ALU (labeled "integer" in Figure 1 above) and partial ALU (labeled "dual-issue") structures, thereby enabling dual-issue instruction execution for some integer operations. Digital signal processing algorithm implementers should note, however, that both conventional multiplication and NEON SIMD operations are single issue-only (the load-store pipeline, as its name implies, handles memory read and write accesses). And all coders should note that the Cortex-A7 does not include the additional transistor- and power-consuming circuitry necessary to handle out-of-order instruction processing.

Clearly, partial dual issue means dual issue for some instructions, but not others.
|
I understand that some microprocessors such as the ARM Cortex-A8 and A9 support dual-issue pipelining, i.e. they can sustain executing two instructions per cycle. I didn't quite understand the partial dual-issue stated in Table 1 for the A7.
|
What is a partial dual-issue pipeline?
|
$A+$B | ... concatenates $A and $B before passing the resulting array to the pipeline. The pipeline then unrolls the (still empty) array, so you get $null and Test-Path is never called.

$A,$B | ... constructs an array with two nested arrays before passing it into the pipeline. The pipeline then unrolls the outer array and feeds each element (the empty arrays $A and $B) to Test-Path, thus causing the error you observed.

Basically you're doing $A+$B → @() in the former case, and $A,$B → @(@(), @()) in the latter.
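A quick way to see the difference for yourself (my own illustration, not part of the original answer) is to count what each expression actually sends down the pipeline:

$A = $B = @()

# Concatenation yields one empty array -> nothing is piped.
($A + $B).Count                      # 0
($A + $B | Measure-Object).Count     # 0 objects reach the pipeline

# The comma operator yields an array of two (empty) arrays -> two items are piped.
($A, $B).Count                       # 2
($A, $B | Measure-Object).Count      # 2 objects reach the pipeline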
|
I have two arrays $A and $B, both of which could potentially be empty.

$A = $B = @()

This works:

$A+$B | Test-Path

This does not work:

$A,$B | Test-Path

and returns the error:

Test-Path : Cannot bind argument to parameter 'Path' because it is an empty array.

I would have expected both expressions to fail, as the + operator is adding one empty array to another, meaning the resulting array is still empty. Looking at the overall types of both expressions shows that they are the same type.

PS Y:\> $E = $A+$B
PS Y:\> $E.getType()
IsPublic IsSerial Name BaseType
-------- -------- ---- --------
True True Object[] System.Array
PS Y:\> $F = $A,$B
PS Y:\> $F.getType()
IsPublic IsSerial Name BaseType
-------- -------- ---- --------
True True Object[] System.Array

So why do $A+$B and $A,$B interact differently with Test-Path?
|
Why does $A+$B and $A,$B interact differently with Test-Path
|
You can't do it that way. You'll have to do it in 2 parts:

$results = Get-ChildItem -path \\$server\e$ -Recurse | Where-Object {$_.name -eq 'help.txt'}
if ($results) {
$results | out-file "c:\temp\$server.txt"
}
|
Is there a way, when you use Get-ChildItem with a Where-Object clause, to have it produce the results in a text file only if there are results?

Get-ChildItem -path \\$server\e$ -Recurse | Where-Object {$_.name -eq 'help.txt'} | `
out-file "c:\temp\$server.txt"

The above will produce a file regardless of whether there are results. I'm having trouble implementing the logic to only create the file when results are available.
|
Out-File only if Results are available. No Zero K files
|
Try process substitution:

B.sh <(A.sh "some script")

<(...) is process substitution. It makes the output of A.sh available to B.sh as a file-like object. This works as long as B.sh does simple sequential reads. This requires bash or another advanced shell. Process substitution is not required by POSIX and, consequently, simple shells like dash do not support it.

Documentation

From man bash:

Process Substitution

Process substitution is supported on
systems that support named pipes (FIFOs) or the /dev/fd method of
naming open files. It takes the form of <(list) or >(list). The
process list is run with its input or output connected to a FIFO or
some file in /dev/fd. The name of this file is passed as an argument
to the current command as the result of the expansion. If the >(list)
form is used, writing to the file will provide input for list. If
the <(list) form is used, the file passed as an argument should be
read to obtain the output of list.When available, process substitution is performed simultaneously with
parameter and variable expansion, command substitution, and
arithmetic expansion.
|
I have the following two scripts:

A.sh
B.sh

A.sh is as follows:

#!/bin/bash
some/path/ 2>/dev/null -jar some/path/java.jar "$1"

Let's assume that A.sh takes input as:

$ A.sh "some script"

and we'd redirect it to some output as:

$ A.sh "some script" > output.txt

And let's assume that B.sh takes a file (file.txt) as input and processes it like:

$ B.sh "file.txt"

Now, I need a script which can pipeline output.txt to B.sh — something which can perform the below operations in a single script. (Is it possible to do so? If not, any solution?)

$ A.sh "some script" > output.txt
$ B.sh "output.txt"
|
How to redirect a file to other file
|
It is not complicated at all: if you check the official Pipeline example, they do exactly what you are doing. However, you noticed that most of those stages (Estimators or Transformers) must be fitted first, yet in the example they don't do that. Why? Because the developers considered this and programmed the Pipeline class in a way that does this step for you (all model instantiations, fits, transformations and predictions are done inside).
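In other words, a minimal sketch of my own (not from the original answer; the trainingData variable is an assumption, and the exact DataFrame/Dataset type depends on your Spark version): you just fit the whole pipeline and let it fit the IDF stage internally.

import org.apache.spark.ml.PipelineModel;
import org.apache.spark.sql.DataFrame;   // Dataset<Row> on Spark 2.x+

// Fitting the Pipeline fits each Estimator stage (including IDF) in order,
// feeding it the data already transformed by the previous stages.
PipelineModel model = pipe.fit(trainingData);

// The resulting PipelineModel applies all fitted stages; the output
// DataFrame contains the "IDFFeatures" column defined above.
DataFrame transformed = model.transform(trainingData);
transformed.show();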
|
I really want to use the new pipeline concept that Spark is pushing, but since the IDF object needs to be fit on data and then used to transform it, I don't know how to use it. I want to do this:

Tokenizer tk = new Tokenizer()
.setInputCol("text")
.setOutputCol("words");
HashingTF tf = new HashingTF()
.setNumFeatures(1000)
.setInputCol(tk.getOutputCol())
.setOutputCol("rawFeatures");
IDF idf = new IDF()
.setInputCol(tf.getOutputCol())
.setOutputCol("IDFFeatures");
Pipeline pipe = new Pipeline()
.setStages(new PipelineStage[] {tk, tf, idf});

but unless I've misunderstood, that doesn't work with idf.
|
How do I use IDF in a spark pipeline?
|
<| is an operator which binds less tightly than function application, so

Prop.forAll fiveAndThrees <| fun number -> ...

is parsed as

(Prop.forAll fiveAndThrees) <| fun number -> ...

Prop.forAll takes two parameters, an Arbitrary<T> and a function T -> Testable, so Prop.forAll fiveAndThrees is a (partially applied) function to which the right-hand side is passed.
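A tiny standalone illustration of the same parsing rule (my own example, not from the original answer):

let describe prefix value = sprintf "%s: %d" prefix value

// These two lines are equivalent: <| simply passes the value on the right
// as the last argument of the partially applied function on the left.
let a = describe "answer" 42
let b = describe "answer" <| 42   // parsed as (describe "answer") <| 42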
|
I need help understanding how "<|" behaves for the following code:Prop.forAll fiveAndThrees <| fun number ->
let actual = transform number
let expected = "FizzBuzz"
expected = actualThe documentation says the following:Passes the result of the expression on the right side to the function
on the left side (backward pipe operator).

fiveAndThrees is NOT a function but a value, and it's on the left side of the operator. I interpret the above definition as: take an input called "number" and feed it into the "transform" function. However, if we're passing the result of the expression on the right side to the function on the left side, then when and how does the input (i.e. number) actually get initialized? I just don't see it.

The full test is the following:

[<Fact>]
let ``FizzBuzz.transform returns FizzBuzz`` () =
let fiveAndThrees = Arb.generate<int> |> Gen.map ((*) (3 * 5))
|> Arb.fromGen
Prop.forAll fiveAndThrees <| fun number ->
let actual = transform number
let expected = "FizzBuzz"
expected = actualThe function to be tested is the following:let transform number =
match number % 3, number % 5 with
| 0, 0 -> "FizzBuzz"
| _, 0 -> "Buzz"
| 0, _ -> "Fizz"
| _ -> number.ToString()
|
How does the backwards pipeline (i.e. "<|") really work?
|
type someFile | find word simply fails, as find requires the "word" in quotes. So the solution is to change your line to

type set_2.txt | find "%%a" > nul
if !errorlevel! EQU 0 (
....

Or even simpler:

type set_2.txt | find "%%a" > nul || echo %%a not found
|
When I run the code, it outputs FIND: Parameter format not correct and The process tried to write to a nonexistent pipe. From this, I'm pretty sure the for loop can't handle the pipe and/or redirection. I'm not sure what to do from here; I've tried running it outside a loop, and that works fine, but inside the loop it chucks the dummy. Does anyone know why, or how I can fix this?

@ECHO OFF
setlocal enabledelayedexpansion
for /F "tokens=*" %%a in (set_1.txt) do (
type set_2.txt | find %%a > nul
if !errorlevel! EQU 1 (
echo %%a
)
)
endlocal
pauseAnd before anyone says it, I'm aware this is not the most efficient method for finding strings in files, but it doesn't matter for the file sizes I'm dealing with.
|
Why is this pipe not working inside a for loop
|
awk is buffering when its output is not going to a terminal. If you have GNU awk, you can use its fflush() function to flush after every print:

gawk '{print; fflush()}' <&5 | grep foo

In this particular case, though, you don't need both awk and grep; either will do.

awk /foo/ <&5
grep foo <&5

See BashFAQ 9 for more on buffering and how to work around it.
|
I'm testing some netcat udp shell tools and was trying to take output and send it through standard pipe stuff. In this example, I have a netcat client which sends 'foo' then 'bar' then 'foo' with newlines for each attempt reading from the listener:[root@localhost ~ 05:40:20 ]# exec 5< <(nc -d -w 0 -6 -l -k -u 9801)
[root@localhost ~ 05:40:25 ]# awk '{print}' <&5
foo
bar
foo
^C
[root@localhost ~ 05:40:48 ]# awk '{print}' <&5 | grep foo
^C
[root@localhost ~ 05:41:12 ]# awk '{print}' <&5 | grep --line-buffered foo
^C
[root@localhost ~ 05:41:37 ]#
[root@localhost ~ 05:44:38 ]# grep foo <&5
foo
foo
^C
[root@localhost ~ 05:44:57 ]#I've checked the --line-buffered... and I also get the same behavior from 'od -bc', ie, nothing at all. Grep works on the fd... but if I pipe that grep to anything (like od -bc, cut), I get the same (nothing). Tried prefixing the last grep with stdbuf -oL to no avail. Could > to file and then tail -f it, but feel like I'm missing something.Update:
It appears to be something descriptor/order/timing related. I created a file 'tailSource' and used this instead, which produced the same issue (no output) when I ran echo -e "foo\nfoo\nbar\nfoo" >> tailSource

[root@localhost shm 07:16:18 ]# exec 5< <(tail -n 0 -f tailSource)
[root@localhost shm 07:16:32 ]# awk '{print}' <&5 | grep foo... and when I run without the '| grep foo', I get the output I'd expect.
(GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu))
|
listening on netcat works but not grep'able (or several other utilities)
|
Well, frequency is the reciprocal of time, so:

1 / 1650 ps = 606 MHz = 0.606 GHz

and

1 / 700 ps = 1429 MHz = 1.429 GHz

Note that the prefix p stands for pico, which is a multiplier of 10^-12. So one picosecond (ps) is equal to 10^-12 = 0.000000000001 seconds.
|
My assignment deals with calculations of pipelined CPU and single-cycle CPU clock rates. The following data is given about the time each operation takes to execute:

IF:400 PS
ID:100 PS
EX:350 PS
MEM:700 PS
WB:100 PS
A. What is the clock frequency if the CPU works as a single cycle? How long does it take to execute a single operation?
B. What is the clock frequency if the CPU works as a pipelined CPU? How long does it take to execute a single operation?

I know that for A a single operation takes 1650 ps to execute, because in a single-cycle CPU we have to perform every stage to execute a single operation. What I don't understand is why the frequency is 0.606. For B, I know that the cycle time is 700 ps, because a pipelined CPU takes the longest stage as the cycle time. What I don't know is the answer to the frequency question. Any help is appreciated.
|
Pipeline Processor Calculation
|
You can use the HTTP error code from the exception. BigQuery is a REST API, so the response codes that are returned match the description of HTTP error codes here. Here is some code that handles retryable errors (connection, rate limit, etc.), but re-raises when it is an error type that it doesn't expect.

except HttpError, err:
# If the error is a rate limit or connection error, wait and
# try again.
# 403: Forbidden: Both access denied and rate limits.
# 408: Timeout
# 500: Internal Service Error
# 503: Service Unavailable
if err.resp.status in [403, 408, 500, 503]:
print '%s: Retryable error %s, waiting' % (
self.thread_id, err.resp.status,)
time.sleep(5)
else: raise

If you want even better error handling, check out the BigqueryError class in the bq command-line client (this used to be available on code.google.com, but with the recent switch to gCloud it isn't any more; if you have gcloud installed, the bq.py and bigquery_client.py files should be in the installation).
|
I have built a pipeline on AppEngine that loads data from Cloud Storage to BigQuery. This works fine... until there is an error. How can I catch loading exceptions raised by BigQuery from my AppEngine code? The code in the pipeline looks like this:

#Run the job
credentials = AppAssertionCredentials(scope=SCOPE)
http = credentials.authorize(httplib2.Http())
bigquery_service = build("bigquery", "v2", http=http)
jobCollection = bigquery_service.jobs()
result = jobCollection.insert(projectId=PROJECT_ID,
body=build_job_data(table_name, cloud_storage_files))
#Get the status
while (not allDone and not runtime.is_shutting_down()):
try:
job = jobCollection.get(projectId=PROJECT_ID,
jobId=insertResponse).execute()
#Do something with job.get('status')
except:
exc_type, exc_value, exc_traceback = sys.exc_info()
logging.error(traceback.format_exception(exc_type, exc_value, exc_traceback))
time.sleep(30)This gives me status error, or major connectivity errors, but what I am looking for is functional errors from BigQuery, like fields formats conversion errors, schema structure issues, or other issues BigQuery may have while trying to insert rows to tables.If any "functional" error on BigQuery's side happens, this code will run successfully and complete normally, but no table will be written on BigQuery. Not easy to debug when this happens...
|
How to catch BigQuery loading errors from an AppEngine pipeline
|
The reason for saving NPC in the pipeline is that sometimes the next instruction in the pipeline will want to use it.

Look at the definition of beq. It has to compute the target address of the branch. Some branches use a fixed location for the target address, like "branch to address A." This is called "branching to an absolute address."

Another kind of branch is a "relative" branch, where the branch target is not an absolute address but an offset, that is, "branch forward X instructions." (If X is negative, this ends up being a backwards branch.) Now consider this: forwards/backwards from where? From NPC. That is, for a relative branch instruction, the computation for the new PC value is:

NewPC = NPC + X

Why do architectures include the ability to perform relative branches? Because it takes less space. Let's say that X has a small value, like 16. The storage required for an absolute branch to a target address is:

sizeof(branch opcode) + sizeof(address)

But the storage for a relative branch of offset 16 is only:

sizeof(branch opcode) + 1 ## number of bytes needed to hold the value 16!

Of course, larger offsets can be accommodated by increasing the number of bytes used to hold the offset value. Other kinds of space-saving, range-increasing representations are possible too.
|
I have only touched on pipeline theory for a few hours. Perhaps it's an easy question, but I really need your help. I know that we should store mem[pc] into the IF/ID pipeline register in the fetch stage because we will decode it in the next stage, and we should also update PC in the fetch stage because we will fetch the next instruction via that updated PC in the next cycle. But I really don't understand why we should also store NPC into the pipeline register. Below is an explanation derived from Computer Organization and Design; I don't get it.

This incremented address is also saved in the IF/ID pipeline register in case it is
needed later for an instruction, such as beq
|
why should we store NPC in pipeline regisger?
|
The bash manual says for set -e:

The shell does not exit if the command that fails is [...]
part of any command executed in a && or || list except the
command following the final && or ||

The dash manual says:

If not interactive, exit immediately if any untested command fails.
The exit status of a command is considered to be explicitly tested
if the command is used to control an if, elif, while, or until;
or if the command is the left hand operand of an "&&" or "||" operator.

In the && case, the command that fails is the left-hand operand of &&, which counts as explicitly tested, so the shell does not abort (the command after && is simply skipped). In the || case, the shell has to run both commands, and the one that fails last is the command following the final ||, which is not considered tested, so the shell concludes there has been an unchecked error and aborts.

I agree it's a bit counterintuitive.
This question already has answers here: set -e and short tests (4 answers). Closed 10 years ago.

Let's say we have a script like this:

#!/bin/bash
set -e
ls notexist && ls notexist
echo still here

won't exit because of set -e, but

#!/bin/bash
set -e
ls notexist || ls notexist
echo still here

will.
Why?
|
why bash script when set -e is set and two pipelines concatenated by && failed [duplicate]
|
Instead of using ExecutionHandler as is, you can extend it to override its handleUpstream() method to intercept the upstream events and call ctx.sendUpstream(e) for the MessageEvents whose message meets your criteria. All other events can be handled by the ExecutionHandler via super.handleUpstream(ctx, e). That is:

public class MyExecutionHandler extends ExecutionHandler {
    public void handleUpstream(ChannelHandlerContext ctx, ChannelEvent evt) throws Exception {
        if (evt instanceof MessageEvent) {
            Object msg = ((MessageEvent) evt).getMessage();
            if (msg instanceof ExecutionSkippable) {
                ctx.sendUpstream(evt);
                return;
            }
        }
        super.handleUpstream(ctx, evt);
    }
    ...
}
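For context, a rough wiring sketch (my own illustration, not from the original answer; the executor settings and the ExecutionSkippable marker interface are assumptions, and MyExecutionHandler is assumed to expose the same Executor-taking constructor as ExecutionHandler) of how such a handler could replace the plain ExecutionHandler in the pipeline:

import org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor;

// Simple messages implementing ExecutionSkippable bypass the thread pool;
// everything else is handed off to the executor as before.
pipeline.addLast("decoder", new MessageDecoder());
pipeline.addLast("encoder", new MessageEncoder());
pipeline.addLast("executor", new MyExecutionHandler(
        new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576)));
pipeline.addLast("handler", new ServerHandler(this.networkingListener));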
|
The server I'm developing has different tasks to perform based on messages received from clients. Some tasks are very simple and require little time to perform, but others may take a while. Adding an ExecutionHandler to the pipeline seems like a good solution for the complicated tasks, but I would like to avoid threading the simple tasks. My pipeline looks like this:

pipeline.addLast("decoder", new MessageDecoder());
pipeline.addLast("encoder", new MessageEncoder());
pipeline.addLast("executor", this.executionHandler);
pipeline.addLast("handler", new ServerHandler(this.networkingListener));Where MessageEncoder returns a Message object (fordecode) which defines the requested task.Is there a way to skip the execution handler based on the decoded message?The question can be generalized to: is there a way to condition whether or not the next handler will be used?Thanks.
|
Conditional ExecutionHandler in pipeline
|
Connecting two processes by a pipe redirects the output from the first to the second. Thus, connecting a process which writes output to a process that does nothing with that output means no output occurs.

By contrast, connecting a process which does nothing to a process that generates output, the latter will proceed to generate output as usual.

By the way, what's the purpose of the kill 0 lines? I doubt very much that they serve a useful purpose here.
|
I have two programs in bash:

{ { sleep 1s; kill 0; } | { while true; do echo "foo"; done; kill 0;} }

and

{ { while true; do echo "foo"; done; kill 0; } | { sleep 1s; kill 0; } }

(just the order changed). How is it possible that the first one writes a lot of "foo" to the output and the second one writes nothing?
|
Why does pipeline order matter?
|
Because of the spaces, Write-Log sees "Var1: ", + and $Var1 as multiple separate arguments rather than one concatenated string, and only the first one is bound to $message. Try this:

Write-Log "Var1: $Var1"
|
Console application:

class Program
{
static void Main()
{
using (var runSpace = RunspaceFactory.CreateRunspace())
{
runSpace.Open();
runSpace.SessionStateProxy.SetVariable("Var1", "Alex");
using (var pipeline = runSpace.CreatePipeline("C:\\P.ps1"))
{
pipeline.Invoke();
}
}
}
}

P.ps1:

$logFile = "C:\MyLog.txt"
function Write-Log ([string] $message)
{
Add-Content -Path $logFile -Value $message
}
Write-Log "Var1: " + $Var1I expect "Var1: Alex" is written into my log file, but I get "Var1: ".What did I wrong?Edit:My original problem is on the following site:http://nugetter.codeplex.com/workitem/31555
|
PowerShell: RunSpace.SessionStateProxy.SetVariable is not setting variable
|
I have managed to solve it by adapting the answer found in another SO question. The trick is to create a process that accepts the TOOL3 output and emits some value, like this:

process BLOCK {
input:
path csv_files
output:
val ready
// Does nothing except return a value
exec:
ready = 1
}

Then this process needs to be inserted into the pipeline by feeding it the TOOL3 results and making its outputs an additional input for TOOL1:

bundled_pdb_ch = pdb_ch.collate(20)
csv2_ch = TOOL2(bundled_pdb_ch)
csv3_ch = TOOL3(bundled_pdb_ch)
semaphore_ch = BLOCK(csv3_ch.collect())
csv1_ch = TOOL1(pdb_ch, semaphore_ch)
out_ch = csv1_ch.mix(csv2_ch).mix(csv3_ch)

BLOCK is only launched when csv3_ch can be collected (i.e. all instances of TOOL3 have finished), and TOOL1 will not run without a value from semaphore_ch. This value is actually ignored by the TOOL1 body, but it is still necessary for the process to launch. And since it's a value channel, it can be read as many times as necessary without being exhausted. Technically, the intermediate process is not necessary, because I could just pass csv3_ch.collect() to a second channel of TOOL1. However, it prevents the TOOL1 work directories from being littered with TOOL3 outputs.
|
I am running a Nextflow pipeline that basically sends a bunch of files through multiple tools and collects resulting CSVs; however, two of the tools I need cannot be run simultaneously (they use the same software under the hood, and we only have one license). The pipeline looks like this:

workflow MY_WORKFLOW {
take:
pdb_ch
main:
csv1_ch = TOOL1(pdb_ch)
bundled_pdb_ch = pdb_ch.collate(20)
csv2_ch = TOOL2(bundled_pdb_ch)
csv3_ch = TOOL3(bundled_pdb_ch)
out_ch = csv1_ch.mix(csv2_ch).mix(csv3_ch)
emit:
out_ch
The order in which they launch doesn't matter: I'm okay if TOOL1 processes the entire dataset and then TOOL3 starts, or if they run in alternating order, or whatever. What matters is that at any point in time there should be either one of these two running or neither, but never both. Using the maxForks directive makes sure only one instance of a given process is running, but is there a way to create some more general lock/semaphore that could be shared between different processes?
|
How can I make sure Nextflow processes don't run simultaneously?
|
You have built your function in the simplified style. Functions have BEGIN, PROCESS, and END blocks; in the simplified style, your code runs in the END block by default, which only sees the final pipeline input. You can fix it by declaring that you want to operate in the PROCESS script block, like this:

Function ProcessNames ([Parameter(ValueFromPipeline=$true)][string[]]$Names) {
PROCESS{
Foreach ( $name in $Names ) {
$name
}
}
}
'bob', 'alice' | ProcessNames
|
How can I pass values to my functions in the same way the PowerShell cmdlet Write-Output can be passed multiple values and process them as expected?

'one', 'two' | Write-Output

which will produce the following output:

one
two

I thought this was a way to do that, but it isn't:

Function ProcessNames ([Parameter(ValueFromPipeline=$true)][string[]]$Names) {
Foreach ( $name in $Names ) {
# do something
}
}
'bob', 'alice' | ProcessNames

In the above example, only the last element in the list gets processed - in this case 'alice'. What am I doing wrong?
|
How can I pass multiple values to a PowerShell function that accepts an array of input objects?
|
I figured it out. In the pipeline I added a variable CI=true. Locally I fixed it by adding a .env file and adding CI=true there. In the config I added reuseExistingServer: true to playwright.config.ts:

webServer: {
command: 'ng s',
url: 'http://localhost:4200/',
reuseExistingServer: true
}
|
After running the Playwright tests on the command line (e.g. with the command npx playwright test), it always shows Serving HTML report at http://localhost:0000. Press Ctrl+C to quit. How can I avoid pressing Ctrl+C and have it quit automatically, so I can use Playwright in an (Azure) pipeline?
|
How to finish running playwright tests in pipeline?
|
You can simply drop the async/await:

function download() {
const source = createReadStreamSomeHow();
pipeline(source, fs.createWriteStream("file.ext"));
return source;
The Promise is still executed, but you have no info about when it is done or whether it throws. Or you return the source together with the promise:

function download() {
const source = createReadStreamSomeHow();
const promise = pipeline(source, fs.createWriteStream("file.ext"));
return { source, promise }
}
// you can enqueue the source ...
let task = download();
taskArray.add(task.source);
// ... and handle the promise however you like.
await task.promise;
|
I'm stuck somehow. I have something like this in my code:

async function download() {
const source = createReadStreamSomeHow();
await pipeline(source, fs.createWriteStream("file.ext"));
return source;
}

I need to return the readable stream (source) and store it in an array somewhere, and at the same time pipe the data from the read stream to "file.ext". But when I call:

let task = await download();
taskArray.add(task);

the pipeline() call pauses execution of the function, hence source is only returned when the data has been completely piped to "file.ext". Although I understand why the code behaves this way, I can't seem to find a way to return the source stream from the download() function and still pipe the data to "file.ext" at the same time. Is there a better way to achieve this and make it work? Thanks for the help in advance.
Better way to return a promise that resolves to a stream in Node JS
|
You need to use dataset parameters for this. Like the folder path parameter in the pipeline, create another pipeline parameter for the file name as well, and give @triggerBody().folderPath and @triggerBody().fileName to those when creating the trigger.

Pipeline parameters: make sure you select all containers in the storage event trigger while creating the trigger.

Assigning trigger parameters to pipeline parameters: now create two dataset parameters for the folder and the file name like below.

Source dataset parameters: use these in the file path of the dataset's dynamic content.

If you use a copy activity for this dataset, then assign the pipeline parameter values (which we can get from the trigger parameters) to the dataset parameters like below. If you use dataflows for the dataset, you can assign these in the dataflow activity itself, like below, after setting the dataset as the source in the dataflow.
|
I need help. I've created a pipeline for data processing, which imports a CSV and copies the data to a DB. I've also configured a Blob storage trigger, which triggers the pipeline with a dataflow when a specific file is uploaded into a container. For the moment, this trigger is set to monitor one container; however, I would like to make it more universal: monitor all containers in the desired Storage Account so that if someone uploads some files, the pipeline is triggered. But for that I need to pass the container name to the pipeline so it can be used in the data source file path. For now I've created something like this: in the pipeline, I've added this parameter, @pipeline().parameters.sourceFolder. Next, in the trigger, I've set this. Now what should I set here to pass this folder path?
|
Azure Data Factory, how to pass parameters from trigger/pipeline in to data source
|
I got it to work by creating a pscustomobject and piping it to the function, where the ValueFromPipelineByPropertyName property is set to true for both parameters.

function Test{
[cmdletbinding()]
param(
[parameter(ValueFromPipelineByPropertyName=$true, Mandatory=$true,Position=0)]
[string]$jeden,
[parameter(ValueFromPipelineByPropertyName=$true, Mandatory=$true,Position=1)]
[string]$dwa
)
Process{write-host "$jeden PLUS $dwa"}
}
$Params = [pscustomobject]@{
jeden = “Hello”
dwa = “There”
}
$Params | Test

OUTPUT:

Hello PLUS There

The variable assignment can be skipped and the [pscustomobject] can be piped directly.
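For example (my own illustration of that last remark, not part of the original answer):

[pscustomobject]@{ jeden = 'Hello'; dwa = 'There' } | Test
# OUTPUT: Hello PLUS There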
|
I'm having trouble passing two parameters via the pipeline to a function.

function Test{
[cmdletbinding()]
param(
[parameter(ValueFromPipeline=$true, Mandatory=$true,Position=0)]
[string]$jeden,
[parameter(ValueFromPipeline=$true, Mandatory=$true,Position=1)]
[string]$dwa
)
Process{write-host "$jeden PLUS $dwa"}
}
"one", "two"|TestWhat I expected as an outcome wasone PLUS twobut what I got wasone PLUS one
two PLUS twoI'm obviously doing something wrong, since both parameters get used twice. Please advise.
|
Pass multiple parameters to function by pipeline
|
The output of git branch -a shows the cause of the problem: the repo does not have any local branches and it only has one remote branch, remotes/origin/pipeline_creation. You need to fetch the main branch first before you can use it in the git diff command:

git fetch --depth=1 origin main
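Put into the job from the question, that could look roughly like this (my own sketch, not from the original answer; note it diffs the fetched origin/main against HEAD, since the CI job runs on a detached HEAD of the pipeline_creation commit):

script:
  - echo "test stage started"
  # make origin/main available in the shallow CI clone
  - git fetch --depth=1 origin main
  # compare the fetched main branch with the commit this job checked out
  - git diff --color=always origin/main..HEAD README.md | perl -wlne 'print $1 if /^\e\[32m\+\e\[m\e\[32m(.*)\e\[m$/'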
|
This question already has answers here: fatal: ambiguous argument 'origin': unknown revision or path not in the working tree (13 answers). Closed 10 months ago.

I am working with GitLab. I have a yml file which runs the git diff command. This command shows the difference between two branches. Here is the yml file:

image: bitnami/git:latest
stages:
- Test
Test_stage:
tags:
- docker
stage: Test
script:
- echo "test stage started"
- git diff --color=always origin/main..pipeline_creation README.md | perl -wlne 'print
$1 if /^\e\[32m\+\e\[m\e\[32m(.*)\e\[m$/'

When I run this in the pipeline I am getting this error:

Created fresh repository.
Checking out e33fa512 as pipeline_creation...
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:00
$ echo "test stage started"
test stage started
$ git branch -a
* (HEAD detached at e33fa51)
remotes/origin/pipeline_creation
$ git diff main..pipeline_creation README.md
fatal: ambiguous argument 'main..pipeline_creation': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'

Locally the command works fine, but when I run it in the pipeline it's not showing the expected result. Does someone know what I am doing wrong here?
|
git error: fatal: ambiguous argument 'origin/main..pipeline_creation': unknown revision or path not in the working tree [duplicate]
|
Unfortunately, there isn't a way to run a task only if another one failed. You can run a task only if the previous one succeeded, by calling it after the previous one, or always, by using defer.
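For reference, a minimal sketch of the "always run" defer option mentioned above (my own example, assuming a reasonably recent Task v3; note this is not the conditional fallback the question asks for — B runs unconditionally at the end):

version: '3'

tasks:
  main-task:
    cmds:
      # B is deferred, so it runs when main-task exits, whether A failed or not.
      - defer: { task: B }
      - task: A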
|
I have three tasks created with a Taskfile: main-task and its two preconditions (or deps), e.g. A and B. I need main-task to run task A; if task A works fine and did the job, then ignore the second task B, otherwise fall back to task B and execute it (that's why I added ignore_error: true). How do I put this logic inside main-task using Taskfile syntax? Thanks.

Example:

---
version: 3
tasks:
A:
cmds:
- cmd: exit 1
ignore_error: true
B:
cmds:
- exit 1
main-task:
deps: # Run A only, But if it fails then Run B
cmds:
- task: # or here: Run A only, But if it fails then Run B
|
How to create a conditional task in Taskfile?
|
To illustrate a longer pipeline yet keep the output short, let us first take the first 6 rows and columns, then use brace brackets and dot as shown.

library(dplyr)
mtcars %>%
head(6) %>%
select(1:6) %>%
{ list(mutate(., wt = wt + 1), mutate(., wt = wt - 1)) }

giving:

[[1]]
mpg cyl disp hp drat wt
Mazda RX4 21.0 6 160 110 3.90 3.620
Mazda RX4 Wag 21.0 6 160 110 3.90 3.875
Datsun 710 22.8 4 108 93 3.85 3.320
Hornet 4 Drive 21.4 6 258 110 3.08 4.215
Hornet Sportabout 18.7 8 360 175 3.15 4.440
Valiant 18.1 6 225 105 2.76 4.460
[[2]]
mpg cyl disp hp drat wt
Mazda RX4 21.0 6 160 110 3.90 1.620
Mazda RX4 Wag 21.0 6 160 110 3.90 1.875
Datsun 710 22.8 4 108 93 3.85 1.320
Hornet 4 Drive 21.4 6 258 110 3.08 2.215
Hornet Sportabout 18.7 8 360 175 3.15 2.440
Valiant 18.1 6 225 105 2.76 2.460

This could also be done entirely in base R.

mtcars |>
head(6) |>
subset(select = 1:6) |>
list(. = _) |>
with(list(transform(., wt = wt + 1), transform(., wt = wt - 1)))
|
Imagine that in the following example, instead of mtcars, I have some very long pipeline:

list(mtcars %>%
mutate(wt = wt + 1),
mtcars %>%
mutate(wt = wt - 1))

In order not to have to write the same long pipeline twice, and also not to have to save the intermediate object resulting from the pipeline, I was hoping to use the %T>% pipe from magrittr:

mtcars %T>%
mutate(wt = scale(wt)) %>%
mutate(wt = wt-1)

But that doesn't work. So in what other way can I get the same list as in the first snippet of code, without breaking the pipeline and without having to write it twice?
|
R - make the same pipeline save two objects in a list
|
expand() defines a list of files. If you're using two parameters, the Cartesian product will be used. Thus, your rule will define as output ALL files with your extension list for ALL samples. Since you define a wildcard in your input, I think what you want is all files with your extensions for ONE sample, with the rule executed as many times as there are samples. You're mixing up wildcards and placeholders for the expand() function. You can define a wildcard inside an expand() by doubling the brackets:

rule all:
input: expand(config['bamdir'] + "{sample}.dedup.downsampled.bam{ext}", ext = config['workreq'], sample=SAMPLELIST)
rule gridss_preprocess:
input:
ref=config['ref'],
bam=config['bamdir'] + "{sample}.dedup.downsampled.bam",
bai=config['bamdir'] + "{sample}.dedup.downsampled.bam.bai"
output:
expand(config['bamdir'] + "{{sample}}.dedup.downsampled.bam{ext}", ext = config['workreq'])This expand function will expand in list{sample}.dedup.downsampled.bam.cigar_metrics{sample}.dedup.downsampled.bam.computesamtags.changes.tsv{sample}.dedup.downsampled.bam.coverage.blacklist.bed{sample}.dedup.downsampled.bam.idsv_metricsand thus define the wildcardsampleto match the files in the input.
|
I am currently using Snakemake for a bioinformatics project. Given a human reference genome (hg19) and a bam file, I want to be able to specify that there will be multiple output files with the same name but different extensions. Here is my code:

rule gridss_preprocess:
input:
ref=config['ref'],
bam=config['bamdir'] + "{sample}.dedup.downsampled.bam",
bai=config['bamdir'] + "{sample}.dedup.downsampled.bam.bai"
output:
expand(config['bamdir'] + "{sample}.dedup.downsampled.bam{ext}", ext = config['workreq'], sample = "{sample}")Currently config['workreq'] is a list of extensions that start with "."For example, I want to be able to use expand to indicate the following filesS1.dedup.downsampled.bam.cigar_metrics
S1.dedup.downsampled.bam.computesamtags.changes.tsv
S1.dedup.downsampled.bam.coverage.blacklist.bed
S1.dedup.downsampled.bam.idsv_metrics

I want to be able to do this for multiple sample files, S_. Currently I am not getting an error when I try to do a dry run. However, I am not sure if this will run properly. Am I doing this right?
|
What is the best way to use expand() with one unknown variable in Snakemake?
|
If a machine is turned off it cannot execute a pipeline.
|
We as a team are working in a branch in GitLab. I (a maintainer) registered a specific runner for this project on my laptop to execute the pipeline. Can my team members use this runner, registered by me, even when I shut down my system? What happens if I registered my runner within Docker?
|
Can my team members use a specific runner in GitLab registered for a project by me (maintainer) in my laptop even when I shut down my laptop?
|
You can consider adding this task to your pipeline: UseDotNet@2. This pipeline task should update the .NET version and, at the same time, update your version of MSBuild within the context of your build. This is a basic example:

- task: UseDotNet@2
displayName: 'Install .NET Core SDK'
inputs:
packageType: sdk
version: 6.0.x
includePreviewVersions: falsehttps://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/reference/use-dotnet-v2?view=azure-pipelines
|
I have an Angular UI and API build which builds fine locally (Visual Studio 2019), but in the Azure pipeline it fails with the error below. In the pipeline tasks I cannot find any options to upgrade/downgrade the SDK version nor the MSBuild version. There should definitely be an option somewhere that I am unable to find. Can someone please shed some light?

Error : Version 5.0.407 of the .NET Core SDK requires at least version 16.8.0 of MSBuild. The current available version of MSBuild is 15.9.21.664. Change the .NET Core SDK specified in global.json to an older version that requires the MSBuild version currently available.
|
How to upgrade MSbuild version in Azure pipeline?
|
Based on your question, if you are asking how to create a variable of type boolean in Terraform, that is done like this:

variable "END" {
type = bool
description = "End variable."
default = false
}

You can then reference that variable in the resource definition:

resource "azurerm_data_factory_pipeline" "test" {
name = .....
resource_group_name = ...
data_factory_id = ...
variables = {
"END" = var.END
}
}

Alternatively, you can set it without defining the Terraform variable, like this:

resource "azurerm_data_factory_pipeline" "test" {
name = .....
resource_group_name = ...
data_factory_id = ...
variables = {
"END" = false
}
}
|
In the Terraform documentation I found the following example:

resource "azurerm_data_factory_pipeline" "test" {
name = .....
resource_group_name = ...
data_factory_id = ...
variables = {
"bob" = "item1"
}

But I need to create a boolean variable; in the Azure portal I have the type field. How can I set the variable like this:

"variables": {
"END": {
"type": "Boolean",
"defaultValue": false
}
}
|
terraform azurerm_data_factory_pipeline assing type to the variables
|
You can define the variables outside of your pipeline, which makes them global and accessible in every stage.

// Global variables
def myname = "bar"
def myip = "foo"
pipeline {
agent none
stages {
stage('STAGE 1') {
agent {
label 'node1'
}
steps {
script {
myname = sh(returnStdout: true, script: 'uname -a').trim()
myip = sh(returnStdout: true, script: 'ec2metadata --local-ipv4').trim()
echo "Name: ${myname}, IP: ${myip}" //show BAR
}
}
}
stage('STAGE 2') {
agent {
label 'node2'
}
steps {
echo "pinging $MYNAME with ip $MYIP..." //pinging BAR with ip FOO
}
}
}
}
|
I have this script:

pipeline {
environment {
MYNAME = "BAR"
MYIP = "FOO"
}
agent none
stages {
stage('STAGE 1') {
agent {
label 'node1'
}
steps {
script {
$MYNAME = sh(returnStdout: true, script: 'uname -a').trim()
$MYIP = sh(returnStdout: true, script: 'ec2metadata --local-ipv4').trim()
sh "echo $MYNAME" //show BAR
}
}
}
stage('STAGE 2') {
agent {
label 'node2'
}
steps {
sh 'echo "pinging $MYNAME with ip $MYIP..."' //pinging BAR with ip FOO
}
}
}
}

What I want is to update MYNAME and MYIP with the information from STAGE 1 to use in STAGE 2.
This script is not working, because it keeps the FOO BAR from the definition in the first lines.
|
Share variables between Jenkins pipeline stages
|
You can use after-script for that. It runs regardless of whether script fails.

- step:
name: Atlassian Security Scan
clone:
enabled: true
script:
- pipe: atlassian/git-secrets-scan:0.6.0
after-script:
- echo "clean up"Source:https://bitbucket.org/blog/after-scripts-now-available-for-bitbucket-pipelines
|
How do I ignore the failure so that the pipeline can proceed to the next step?

Why I want to ignore it: this pipeline is for a development environment. There are a few other issues that need to be fixed first, so some code needs to be deployed without waiting for the security fix. The security fix is scheduled for an agreed time (so a few other changes need to be deployed first). As you can see above, it didn't get to the Compress build step.

- step:
name: Atlassian Security Scan
clone:
enabled: true
script:
- pipe: atlassian/git-secrets-scan:0.6.0
|
How do I ignore fail so that it can proceed to next step for using pipe. (Bitbucket Pipeline)
|
You can use Trace-Command to see the under-the-hood parameter binding for each of your pipeline examples (detailed documentation here).

Trace-Command -Name ParameterBinding -PSHost -FilePath debug.txt -Expression {
<# your pipeline here #>
}

You will be able to see both when and how often each expression is evaluated. Generally speaking, each expression is evaluated when its command is being invoked in the pipeline - they are not front-loaded. You can test this by injecting a divide-by-zero error at the end of the pipeline: if the earlier commands run, then the expression at the end isn't actually being evaluated until the command at that stage of the pipeline starts (or until parameter binding starts before that command is called).
|
I was debugging a pipeline and finally realized all objects referenced the same hashtable. It was a surprise. Is the following logic correct?

In the following pipeline, every instance has a property "MasterHash" with a reference to the same hashtable. The NotePropertyValue is calculated only once, when the pipeline starts. Without a Foreach-Object or Where-Object, all expressions can be considered to be evaluated before any items are piped.

$myObjects | Add-Member -NotePropertyName 'MasterHash' -NotePropertyValue @{x='y'}

If a unique hashtable is required for each item, a Foreach-Object is required, as follows.

$myObjects | ForEach-Object { Add-Member -InputObject $_ -NotePropertyName 'MyHash' -NotePropertyValue @{x='y'} }

or

$myObjects | ForEach-Object { $_ | Add-Member -NotePropertyName 'MyHash' -NotePropertyValue @{x='y'} }

I find it a little confusing. Are the expressions evaluated in a begin/process block? Or perhaps are they calculated when the script is compiled? Is Foreach-Object the correct way to access the values in the pipeline? Is there a better syntax to access the pipeline values? For example,

$myObjects | Add-Member -NotePropertyName 'NotMasterHash' -NotePropertyValue ???@{x='y'}???

where the ??? is some magic way to make it evaluate the expression for every item in the pipe.
|
Get pipeline value without a Foreach-Object? (add-member)
|
You cannot use shell redirections like > inside a Julia command; instead, you pass the file as an extra argument:

run(pipeline(`tar -cvf - "source"`, `pigz -k -9`, "source.tar.xz"))
|
I want to compress a folder in Julia, equivalent to:

tar -cvf - "source" | pigz -k -9 > "source.tar.xz"

where source is a folder. I tried this in Julia:

run(pipeline(`tar -cvf - "2001_ A Space Odyssey"` , `pigz -k -9 \> "2001_ A Space Odyssey.tar.xz"`))

but it didn't work. I got the error below:

pigz: skipping: > does not exist
pigz: skipping: 2001_ A Space Odyssey.tar.xz does not exist
2001_ A Space Odyssey/
2001_ A Space Odyssey/cover.jpg
ERROR: LoadError: failed process: Process(`pigz -k -9 '>' '2001_ A Space Odyssey.tar.xz'`, ProcessExited(1)) [1]

How can I do it in Julia?
|
julia shell command tar and pigz
|
Update: added the variable definition.

Sure, you can accomplish this by changing the Version pipeline variable within your pipeline depending on which branch is being built. In your case, you'll want to add a Version variable within your stage/pipeline variables:

variables:
- name: Version
value: ""Then, add this task before you attempt to pack & push your artifacts:- script: |
echo '##vso[task.setvariable variable=Version]pre-release-$(Version)'
echo "Setting Version to pre-release-$(Version)"
condition: ne(variables['Build.SourceBranch'], 'refs/heads/master')

Once the variable is set to your prerelease name, you'll want to pass the version to your NuGet pack step, like this:

- task: DotNetCoreCLI@2
displayName: "Run NuGet Pack"
inputs:
command: "pack"
packagesToPack: "yourproject.csproj"
versioningScheme: "off"
buildProperties: Version=$(Version)
|
I have it set up so that it pushes the package version defined in the .csproj file and is triggered off master. Is there a way I can basically make the package have a name such as pre-release-(package version defined in .csproj) on a commit to a branch that is NOT master?
|
How can I add "pre-release" to a package name and push it to an Azure DevOps feed?
|
When using pipelines, you need to prefix the parameters, depending on which part of the pipeline they refer to, with the name of the respective component (here lgb) followed by a double underscore (lgb__); the fact that here your pipeline consists of only a single element does not change this requirement. So, your parameters should look like (only the first 2 elements shown):

best_params = {'lgb__boosting_type': 'dart',
'lgb__colsample_bytree': 0.7332216010898506
}

You would have discovered this yourself if you had followed the advice clearly offered in your error message:

Check the list of available parameters with `estimator.get_params().keys()`.

In your case, pipe.get_params().keys() gives

dict_keys(['memory',
'steps',
'verbose',
'lgb',
'lgb__boosting_type',
'lgb__class_weight',
'lgb__colsample_bytree',
'lgb__importance_type',
'lgb__learning_rate',
'lgb__max_depth',
'lgb__min_child_samples',
'lgb__min_child_weight',
'lgb__min_split_gain',
'lgb__n_estimators',
'lgb__n_jobs',
'lgb__num_leaves',
'lgb__objective',
'lgb__random_state',
'lgb__reg_alpha',
'lgb__reg_lambda',
'lgb__silent',
'lgb__subsample',
'lgb__subsample_for_bin',
'lgb__subsample_freq'])
|
I have the following pipeline:

from sklearn.pipeline import Pipeline
import lightgbm as lgb
steps_lgb = [('lgb', lgb.LGBMClassifier())]
# Create the pipeline: composed of preprocessing steps and estimators
pipe = Pipeline(steps_lgb)

Now I want to set the parameters of the classifier using the following command:

best_params = {'boosting_type': 'dart',
'colsample_bytree': 0.7332216010898506,
'feature_fraction': 0.922329814019706,
'learning_rate': 0.046566283755421566,
'max_depth': 7,
'metric': 'auc',
'min_data_in_leaf': 210,
'num_leaves': 61,
'objective': 'binary',
'reg_lambda': 0.5185517505019249,
'subsample': 0.5026815575448366}
pipe.set_params(**best_params)

This, however, raises an error:

ValueError: Invalid parameter boosting_type for estimator Pipeline(steps=[('estimator', LGBMClassifier())]). Check the list of available parameters with `estimator.get_params().keys()`.

boosting_type is definitely a core parameter of the lightgbm framework; if it is removed from best_params, however, other parameters cause the ValueError to be raised. So, what I want is to set the parameters of the classifier after the pipeline is created.
|
Why does sklearn pipeline.set_params() not work?
|
This is the exact query (MongoPlayground) that you need if those data are separate documents. Just add a $project stage before the group and use the $switch operator there. (If the field values are numbers, you might want to check $bucket.)

db.collection.aggregate([
{
"$project": {
type: {
"$switch": {
"branches": [
{
"case": {
"$eq": [
"$type",
"software"
]
},
"then": "software"
},
{
"case": {
"$eq": [
"$type",
"hardware"
]
},
"then": "hardware"
}
],
default: "other"
}
}
}
},
{
"$group": {
"_id": "$type",
"count": {
"$sum": 1
}
}
}
])

Also, I'd like to recommend avoiding the field name type. It isn't actually reserved in MongoDB, but it could cause conflicts with some drivers since, in schema/model files, type fields refer to the exact BSON type of the field.
|
The data I have is:[
{ type: 'software' },
{ type: 'hardware' },
{ type: 'software' },
{ type: 'network' },
{ type: 'test' },
...
]

I want to create a MongoDB group-by aggregation pipeline to return the data like this:
I only want 3 objects in the result.
The third object in the result, {_id: 'other', count: 2}, should be the sum of the counts of types other than software and hardware.

[
{_id: 'software', count: 2},
{_id: 'hardware', count: 1},
{_id: 'other', count: 2},
]
|
MongoDB group by aggregation query
|
I'm afraid that this is not possible. You can create two parameters:

parameters:
- name: nodeSize
displayName: nodeSize
type: string
default: size1
values:
- size1
- size2
parameters:
- name: customSize
displayName: customNodeSize
type: string
default: ' '

and then check whether customSize was provided. I understand that this is far from perfect, but we are limited to the functionality we have now.
|
I am currently defining a parameter within my Azure pipeline as follows:

parameters:
- name: nodeSize
displayName: nodeSize
type: string
default: size1
values:
- size1
- size2

The result of this is that when attempting to run the pipeline, the user is presented with a drop-down menu that allows them to choose one of the defined values, as shown below. My goal is to define my parameters in a way that the user can select from the drop-down box OR enter their own value, so the resulting drop-down menu would look like:

Size 1
Size 2
<User's optional input>
|
Allow user to select pre-set or optional parameters azure pipeline
|
I agree with Krzysztof Madej. In Azure DevOps, there is no out-of-the-box method to specify a specific version of the repo via a commit SHA, but you can achieve it with a git command. Since you already have a commit SHA, you could run git reset --hard in a Command Line task. The source repo will then be rolled back to the corresponding version. Here is an example:

cd $(build.sourcesdirectory)
git reset --hard $commithash

Classic pipeline:

YAML pipeline:

steps:
- task: CmdLine@2
inputs:
script: |
cd $(build.sourcesdirectory)
git reset --hard CommitshaIn Azure Pipeline, the default checkout step is equivalent to git clone, it will contain commit history, so you can directly use git commad to roll back the repo version without disabling(- checkout: none) the checkout step. This could be more convenient
|
Is it possible to refer to the repo at a specific hash in a stage? For example: I give a commit hash as a parameter, and thanks to this parameter I can use the repo's folder from a specific version of the repository in the 'build' stage.
|
Azure DevOps pipelines: refer to old version repository
|
You should use an sklearn Pipeline to sequentially apply a list of transforms:

from sklearn.preprocessing import OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
s = pd.DataFrame(data={'Category': ['A', 'A', np.nan, 'B']})
category_pipeline = Pipeline(steps=[
('imputer', SimpleImputer(missing_values=np.nan, strategy='most_frequent')),
('ohe', OneHotEncoder(sparse=False))
]
)
transformer = ColumnTransformer(transformers=[
('category', category_pipeline , ['Category'])
],
)
transformer.fit_transform(s)
array([[1., 0.],
[1., 0.],
[1., 0.],
[0., 1.]])
|
I have a dataframe containing a column with categorical variables, which also includes NaNs.

Category
1 A
2 A
3 Na
4 B

I'd like to use sklearn.compose.make_column_transformer() to prepare the df in a clean way. I tried to impute the NaN values and one-hot encode the column with the following code:

from sklearn.preprocessing import OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.compose import make_column_transformer
transformer= make_column_transformer(
(SimpleImputer(missing_values=np.nan, strategy='most_frequent'), ['Category']),
(OneHotEncoder(sparse=False), ['Category'])
)

Running the transformer on my training data raises ValueError: Input contains NaN

transformer.fit(X_train)
X_train_trans = transformer.transform(X_train)

The desired output would be something like this:

A B
1 1 0
2 1 0
3 1 0
4 0 1

That raises two questions: does the transformer compute both the SimpleImputer and the OneHotEncoder in parallel on the original data, or in the order I introduced them in the transformer? And how can I change my code so that the OneHotEncoder gets the imputed values as input? I know that I can solve it outside of the transformer with pandas in two separate steps, but I'd like to have the code in a clean pipeline format.
|
sklearn.compose.make_column_transformer(): using SimpleImputer() and OneHotEncoder() in one step on one dataframe column
|
You can use "sleep" within a stage to pause its execution.stage("B") {
steps {
echo "Pausing stage B"
sleep(time: 2, unit: "HOURS")
}
}
|
In a declarative pipeline parallel block, is it possible to specify that the 2nd stage should start with a lag of 2 hours after the first one has started? Let's say I have 2 stages as below:

parallel {
stage('A') {
steps {
script {
sh do something
}
}
}
stage('B') {
steps {
script {
sh do something
}
}
}
}

When the job is kicked off, stage A starts; 2 hours later, stage B should start. Is this possible?
|
Jenkins Pipeline - Start a stage 2 hours after the 1st one
|
No, in-order execution pipelines can let instructions finish execution out of order after starting in-order (especially loads are commonly allowed to do this, letting static instruction scheduling help hide load latency). All of this is possible without a ROB. Just scoreboarding register writes is enough to enable that, I think, even for letting ALU instructions as well as loads finish out of order.

AFAIK, having a ROB is only necessary / worthwhile / has any point for a CPU that can start execution of instructions out of order. Hence the name ROB = ReOrder Buffer.

(And a microarchitecture would normally track not-yet-executed instructions in the RS / scheduler as well. ROB tracks from issue to retire; RS tracks from issue to execute. That's using the terminology where "issue" means allocating instructions from the front-end into the out-of-order back-end. Some people call this "dispatch".)
|
Closed. This question does not meet Stack Overflow guidelines and is not currently accepting answers; it does not appear to be about programming within the scope defined in the help center. Closed 4 years ago.

We know that a ROB exists in CPUs with out-of-order pipelines to reorder u-instructions that are executed in an out-of-order manner. Could anyone tell me whether or not a ROB exists in CPUs with an in-order pipeline? If yes, what is the duty of this structure?
|
Does a ROB exist in CPUs with an in-order pipeline?
|
$_ is only populated when working on pipeline input.
If you want to accept both:
"string","morestrings" | ./script.ps1
# and
./script.ps1 -MyParameter "string","morestrings"
... then use the following pattern:
[CmdletBinding()]
param(
[Parameter(Mandatory=$true,ValueFromPipeline=$true)]
[string[]]$MyParameter
)
process {
foreach($paramValue in $MyParameter){
Write-Host "MyParameter: $paramValue"
}
}
|
I'm having trouble understanding this behavior...
Given a Powershell script like this (updated with actual code)...
[cmdletbinding(DefaultParameterSetName="Default")]
param (
[cmdletbinding()]
[Parameter( Mandatory=$true,
ValueFromPipeline = $true,
ParameterSetName="Default")]
[Parameter(Mandatory=$true, ParameterSetName="Azure")]
[Parameter(Mandatory=$true, ParameterSetName="AWS")]
[Alias("Server")]
[String[]] $SqlServer,
# other parameters
)
BEGIN {}
PROCESS {
<# *************************************
PROCESS EACH SERVER IN THE PIPELINE
**************************************** #>
Write-Debug "Processing SQL server $_..."
# GET SMO OBJECTS
$Error.Clear()
try {
# GET CONNECTION TO TARGET SERVER
$_svr = _get-sqlconnection -Server $_ -Login $DatabaseLogin -Pwd $Password
# PROCESS DATABASES ON SERVER
} catch {
$Error
}
} END {}
It is my understanding that $_ is the current object in the pipeline and I think I understand why "Write-Host $_" works. But why does "Write-Host $InputVariable" output an empty string?
How must I define the parameter so I can pass values both through the pipeline and as a named parameter (i.e. - ./script.ps -InputVariable "something")?
This works: "someservername" | ./script
This does not work: ./script -SqlServer "someservername"
Thank you.
|
Powershell Pipeline Processing - $_ Works but $ParamName Does Not
|
i++ returns the previous value of the variable (while ++i returns the new one). So, when i is 2049 (or any larger value), the condition is true and panic is called. When i was 2049 before, it will be 2050 after the if. It will always be incremented, regardless of whether the condition was true or not. This is a fundamental rule in C, C++ and many other languages and has nothing to do with ARM or pipelines.
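A minimal stand-alone illustration (not the original firmware code; panic is replaced by printf here):
#include <stdio.h>

int main(void)
{
    int i = 2049;
    if (i++ > 2048)                 /* compares the old value 2049, then i becomes 2050 */
        printf("i is now %d\n", i); /* prints 2050 */

    int j = 2049;
    if (++j > 2048)                 /* increments first, then compares 2050 */
        printf("j is now %d\n", j); /* also prints 2050 */

    return 0;
}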
|
Nowadays I'm working on ARM R7 and I found a situation I can't understand.
Here's my code:
if ( i++ > 2048 )
{
panic(...); <----- here it has 2050 in its coredump.
}
When it gets panic'ed, it dumps the whole memory and I can load it up w/ T32.
By the way, the variable 'i' has 2050, not 2049.
I don't understand why it has such a value.
Can someone explain why it does, please...
PS: not multi-threaded.
|
Why does my variable have a greater value than I set?
|
YAML has various ways to specify string properties:
single-quoted: "a single that can have : and other weird characters"
single-unquoted: another single command (but needs to avoid some special YAML characters, such as ":"
single-split: >
a single
line string
command that's
broken over
multiple-lines
multi-line: |
a
multi-line
string
Putting that into https://yaml-online-parser.appspot.com you can see how it ends up:
{
"single-quoted": "a single line command",
"single-unquoted": "another single command (but needs to avoid some special YAML characters, such as \":\"",
"single-split": "a single line string command that's broken over multiple-lines",
"multi-line": "a\nmulti-line\ncommand\n"
}
You can find some related questions here about this too: In YAML, how do I break a string over multiple lines?
There are also some more examples here: https://yaml-multiline.info
These are the three most common formats for Buildkite pipeline.yml commands:
command: "simple-command"
command: |
npm install
npm test
command:
- "npm install"
- "npm test"(you can usecommandandcommandsinterchangably)For both of those last two examples, the commands in the list will be run in sequence, and fail as soon as any one of them fails. i.e. if thenpm installcommand fails, the job will immediately finish with a failed state.
|
What do these commands mean in a Buildkite build pipeline?
command:
command: |
command: >-
I am trying to build a build pipeline and I can't find any documentation on them. What are the differences between them?
Example:
command: |
npm install
command: npm install
command: >- npm install
|
What do these commands mean in Buildkite?
|
You can use the following expression which uses system variables to get the current pipeline name:
@pipeline().Pipeline
See this link for more system variables.
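For instance, a minimal sketch of a Set Variable activity that captures the name (the activity and variable names are illustrative, and a String variable called pipelineName is assumed to be defined on the pipeline):
{
    "name": "CapturePipelineName",
    "type": "SetVariable",
    "typeProperties": {
        "variableName": "pipelineName",
        "value": "@pipeline().Pipeline"
    }
}
The same expression can be used anywhere dynamic content is accepted, e.g. as a parameter value passed to the activity that triggers your notification.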
|
I am trying to implement mail notification in a pipeline in ADF. So basically I want to send a notification mail about the successful execution of the pipeline in ADF.
I have created a demo pipeline where I'm copying the data. This is working fine.
And I have created an SSIS package that will send the email notification (also working fine).
Now I want to get the pipeline name dynamically so that the notification also contains the pipeline name.
|
how to get the pipeline name dynamically in ADF?
|
Observables are lazy in the sense that they only produce values when something subscribes to them.
In your second approach you are not subscribing to the Observable; that's why you are not getting any data. You will get data once you subscribe to it.
For the same reason, your map operator is also not working. The map operator will only trigger when you subscribe to the Observable.
If you are not using messagesDataSource$ anywhere else in your code, then you can omit the extra assignment and directly subscribe to the Observable returned:
this.messageService.getMessagesDataSource(this.messagesContainerTitle)
  .subscribe(messages => this.messagesArray = messages)
Or, if you want to do this in the view, you can use the async pipe, which automatically subscribes to your Observable under the hood.
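A hedged sketch of that async-pipe approach (tap is only there for the optional side effect of keeping a plain array; the template binding assumes messagesDataSource$ is exposed on the component):
import { tap } from 'rxjs/operators';

// keep the Observable and, optionally, mirror the latest value into an array
this.messagesDataSource$ = this.messageService
  .getMessagesDataSource(this.messagesContainerTitle)
  .pipe(tap(messages => this.messagesArray = messages));
And in the template the async pipe does the subscribing and unsubscribing for you:
<div *ngFor="let message of messagesDataSource$ | async">{{ message }}</div>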
|
I am trying to assign an Observable first and then subscribe to its values. This code:
//assign observable
this.messagesDataSource$ = this.messageService.getMessagesDataSource(this.messagesContainerTitle);
//subscribe to its values
this.messagesDataSource$.subscribe((messages: Message[]) => {
this.messagesArray = messages;
});
works fine but I don't think this is a good approach. I've tried to make this run with pipe(), but still, if there is no subscription the data array will not be assigned. My third idea was to use map:
this.messagesDataSource$ =
this.messageService.getMessagesDataSource(this.messagesContainerTitle)
.map(messages => this.messagesArray = messages)
but still no result. Can you give me any hint how to assign the observable and then get its data in one stream?
EDIT
map() in a pipe is also not fired there:
this.messagesDataSource$ =
this.messageService.getMessagesDataSource(this.messagesContainerTitle)
.pipe(
map((messages) => {
console.log('in pipe');
return messages
}))
|
RxJS assign observable and get data in one stream
|
I think that the main idea of this variable, $Build.ArtifactStagingDirectory, is to be a clean area so you can manage the code you're pushing from your repo. As far as I know, there is no explicit statement in the documentation that this folder is empty at every new build, but there are a few "clues":
You can see in Microsoft's Build Variables documentation that Build.StagingDirectory is always purged before each new build, so you have a fresh start every build.
In the documentation above you have a few cases where it explicitly states that some folders or files are not cleaned on a new build, like the Build.BinariesDirectory variable.
I've run a few builds and releases pointing to my Web App on Azure, and I never saw an unwanted file or folder that was not related to my build pipeline.
I hope that helps.
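If you would rather not rely on that implicit behaviour, a hedged YAML sketch of an explicit clean could look like this (the job name and build command are placeholders, and the workspace.clean setting is an assumption worth verifying against the current docs for your agent type):
jobs:
- job: Build
  workspace:
    clean: outputs        # ask the agent to delete output directories before the job runs
  steps:
  - checkout: self
    clean: true
  - script: echo "build and copy outputs to $(Build.ArtifactStagingDirectory)"
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
      ArtifactName: 'drop'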
|
We are using Build Pipeline in Azure DevOps to create a Deployment Artifact. Typical steps in such a pipeline are:
Build Solution / Project
Copy dlls output into $Build.ArtifactStagingDirectory
Publish Artifact from $Build.ArtifactStagingDirectory
I just wonder if I can rely on the fact that, on start of each Build, the Build.ArtifactStagingDirectory is empty. Or should I clean the folder as a first step to be sure?
From my experience the folder was always empty, but I am not sure if I can rely on that. Is that something specific to the Azure hosted Agent, and maybe by using custom Build agents I have to do manual clean-ups of this folder? Maybe some old files from the last build could remain there? I did not find this info in the documentation.
Thanks.
|
Is ArtifactStagingDirectory always empty with each build in DevOps pipeline
|
Thanks @K.B for the hint. That solution worked for me. There are two ways to fix this issue:
Add concurrent: true in the JJB template, which unchecks the "Do not allow concurrent builds" option in the Jenkins UI.
The other way is to uncheck "Do not allow concurrent builds" directly in the Jenkins UI.
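For reference, a minimal Jenkins Job Builder sketch (the job name and builder are placeholders):
- job:
    name: gerrit-triggered-build
    concurrent: true          # unchecks "Do not allow concurrent builds"
    builders:
      - shell: './run-build.sh'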
|
I am working on a Jenkins pipeline (Jenkins version 2.138). I pushed a change to Gerrit and JobA triggered and started building on slave1. Now, when I push another patchset, it says Build is already in progress (ETA:N/A)). Any inputs on why the build is in the queue when the slaves are available to accept it?
|
Build is already in progress (ETA:N/A))?
|
After a long while tinkering, I found my solution. In the case of Jenkins, a .groovy script is run. In TeamCity, I had to add a configuration parameter and click "edit" under the Spec: label. Choosing a checkbox allows me to create a pipeline similar to Jenkins. I can add as many parameters as I like.
I then create a Build Step with the Runner Type set to "Command Line". I can then run a bash script on my agent, for example:
#!/bin/bash
if [[ %01. Configure% == true ]]; then
./config_environ.sh %00. Environment%
fi
if [[ %02. Build Kernel% == true ]]; then
./build_kernel.sh
fi
To run this 'pipeline', I click the three dots next to Run (Run custom build), navigate to the parameters tab, and select the build configuration I need. The UI isn't as nice as Jenkins, but it suits my needs.
Attached is the final output. Hopefully this helps others in the future.
|
I'm currently in the process of migrating several dozen Jenkins Pipelines over to TeamCity and I'm just learning TeamCity. Currently we have a large Jenkins pipeline containing 70+ build steps. In Jenkins, this pipeline can be built depending on a boolean check box for each step, so we can choose which steps we wish to build.
For example, I want to run build steps 1, 17, 18, 22, 45, 60. And only those steps. We can't, for example, choose to run 17, 22, 18, 1, 60, 45. It must be sequential, but that's okay.
In TeamCity, I've been reading up on build chains, but this seems to be an all-or-nothing choice. So my question is: is there equivalent functionality in TeamCity that allows us to manually run a sequence of chosen builds? Not manually running single builds individually.
Thank you in advance!
|
Issue migrating Jenkins boolean pipeline to Teamcity build chain
|
Snakemake can manage this for you with the --cluster argument on the command line.
You can supply a template for the jobs to be executed on the cluster.
As an example, here is how I use snakemake on an SGE-managed cluster: a template which will encapsulate the jobs, which I called sge.sh:
#$ -S /bin/bash
#$ -cwd
#$ -V
{exec_job}
Then I use it directly on the login node:
snakemake -rp --cluster "qsub -e ./logs/ -o ./logs/" -j 20 --jobscript sge.sh --latency-wait 30
--cluster will tell which queuing system to use.
--jobscript is the template in which jobs will be encapsulated.
--latency-wait is important if the file system takes a bit of time to write the files. Your job might end and return before the output of the rules is actually visible to the filesystem, which will cause an error.
Note that you can specify rules not to be executed on the nodes in the Snakefile with the keyword localrules:
Otherwise, depending on your queuing system, some options exist to wait for jobs sent to the cluster to finish:
SGE: Wait for set of qsub jobs to complete
SLURM: How to hold up a script until a slurm job (start with srun) is completely finished?
LSF: https://superuser.com/questions/46312/wait-for-one-or-all-lsf-jobs-to-complete
|
In the below example, if the shell script shell_script.sh sends a job to the cluster, is it possible to make snakemake aware of that cluster job's completion? That is, first, file a should be created by shell_script.sh, which sends its own job to the cluster, and then once this cluster job is completed, file b should be created.
For simplicity, let's assume that snakemake is run locally, meaning that the only cluster job originating is from shell_script.sh and not from snakemake.
localrules: that_job
rule all:
input:
"output_from_shell_script.txt",
"file_after_cluster_job.txt"
rule that_job:
output:
a = "output_from_shell_script.txt",
b = "file_after_cluster_job.txt"
shell:
"""
shell_script.sh {output.a}
touch {output.b}
"""PS - At the moment, I am usingsleepcommand to give it a waiting time before the job is "completed". But this is an awful workaround as this could give rise to several problems.
|
Can Snakemake work if a rule's shell command is a cluster job?
|
Yes, a root node and namespace are required by the JSON Decoder.
There is a reason for this. JSON does not require the equivalent of a root node while XML does require a root node. Making it a required property eliminates any ambiguity about the conversion. It's just that simple.
Since you will typically need to use the XML Disassembler also, you can strip the JSON Encoder-added root node by essentially debatching the content with an Envelope Schema matching the JSON Encoder root node. You can remove the Xml Namespace as well, which is also a good idea.
|
The BizTalk decoder pipeline component displays values of RootNode and RootNodeNamespace in the configuration of the pipeline (in BizTalk Admin Console).When I don't specify the root, it blows up and says:
"Reason: Root node name is not specifed in Json decoder properties"When I do specify a root, it gets added. But why does it need to add a root if I have a "root" in my JSON?Example JSON:{ "AppRequest": {
"DocumentName": "whatever",
"Source": "whatever"
}
}
So I would like to use the JSON above, and have a schema with a root of "AppRequest". But if I drop a file with this JSON, and specify AppRequest as the Root, then I get an extra AppRequest wrapper around the AppRequest I already have.
To me this is odd behavior, if you want to have a schema/contract-first approach. I might agree on the JSON with my trading partner, then later build the schema in BizTalk; and now I'm locked in to a schema with a root, and an element under it with the same name again.
Further, the person building the JSON is probably deserializing a class, so the class name will be the "root" of that JSON file.
Yes, I could write my own decoder pipeline components ... just trying to figure out why they did it the way they did it, or if I'm missing something obvious.
|
BizTalk JSON Receive Pipeline Decoder - Does it have to add a root wrapper element?
|
What we want is to traverse theStream (Of ByteString) IO ()in two ways:One that accumulates the incoming lengths of theByteStrings and prints updates to console.One that writes the stream to a file.We can do that with the help of thecopyfunction, which has type:copy :: Monad m => Stream (Of a) m r -> Stream (Of a) (Stream (Of a) m) rcopytakes a stream and duplicates it into two different monadic layers, where each element of the original stream is emitted by both layers of the new dissociated stream.(Notice that we are changing the base monad, not the functor. What changing the functor to anotherStreamdoes is todelimit groupsin a single stream, and we aren't interested in that here.)The following function takes a stream, copies it, accumulates the length of incoming strings withS.scan,prints them, and returns another stream that you can still work with, for example writing it to a file:{-# LANGUAGE OverloadedStrings #-}
import Streaming
import qualified Streaming.Prelude as S
import qualified Data.ByteString as B
track :: Stream (Of B.ByteString) IO r -> Stream (Of B.ByteString) IO r
track stream =
S.mapM_ (liftIO . print) -- brings us back to the base monad, here another stream
. S.scan (\s b -> s + B.length b) (0::Int) id
$ S.copy streamThis will print theByteStrings along with the accumulated lengths:main :: IO ()
main = S.mapM_ B.putStr . track $ S.each ["aa","bb","c"]
|
I'm using the streaming-utilsstreaming-utilsto stream a HTTP response body. I want to track the progress similar to howbytestring-progressallows with lazyByteStrings. I suspect something liketoChunkswould be necessary, then reducing some cumulative bytes read and returning the original stream unmodified. But I cannot figure it out, and thestreamingdocumentation is very unhelpful, mostly full of grandiose comparisons to alternative libraries.Here's some code with my best effort so far. It doesn't include the counting yet, and just tries to print the size of chunks as they stream past (and doesn't compile).download :: ByteString -> FilePath -> IO ()
download i file = do
req <- parseRequest . C.unpack $ i
m <- newHttpClientManager
runResourceT $ do
resp <- http req m
lift . traceIO $ "downloading " <> file
let body = SBS.fromChunks $ mapsM step $ SBS.toChunks $ responseBody resp
SBS.writeFile file body
step bs = do
traceIO $ "got " <> show (C.length bs) <> " bytes"
return bs
|
How to track progress through a streaming ByteString?
|
Yes, you may use eval to execute the docker login command.
For example:
eval $(aws ecr get-login --no-include-email --region us-west-2)
A sample shell script in your case would look like:
#!/bin/bash
eval $(aws ecr get-login --no-include-email --region us-west-2)
# Remaining steps
sudo service docker restart
sudo docker build -t ${image} .
sudo docker tag ${image} ${fullname}
sudo docker push ${fullname}
|
I currently use the following procedure to put a docker container on ECS:
Execute aws ecr get-login --profile tutorial
Paste the returned stuff in the following shell script
The shell script which creates the container:
# Returned by the command in (1)
sudo docker login -u AWS -p looooong -e none https://foobar.dkr.ecr.us-east-1.amazonaws.com
# Remaining steps
sudo service docker restart
sudo docker build -t ${image} .
sudo docker tag ${image} ${fullname}
sudo docker push ${fullname}
My question: Currently, I just paste the sudo docker login ... line every time manually. Can I somehow execute aws ecr get-login --profile tutorial and execute the returned command (with sudo) automatically?
|
How can I automatically put a container on AWS ECS?
|
You can set the parameter for individual steps in a pipeline by using the set_params function, passing the key name as <stepname>__<paramname> (joined using a double underscore).
This can be combined with GridSearchCV to identify the combination of parameters which maximizes the score function from the given values:
p = Pipeline([('vect', CountVectorizer(tokenizer=LemmaTokenizer(),
stop_words='english',
strip_accents='unicode',
max_df=0.5)),
('clf', MultinomialNB())])
g = GridSearchCV(p,
param_grid={
'vect__max_df': [0.3, 0.4, 0.5, 0.6, 0.7], 'vect__ngram_range': [(1,1), (1,2), (2,2)]})
g.fit(X, y)
g.best_estimator_
|
I have just discovered Sklearn's pipeline feature which I think will be useful for sentiment analysis. I have defined my pipeline in the following way:
Pipeline([('vect', CountVectorizer(tokenizer=LemmaTokenizer(),
stop_words='english',
strip_accents='unicode',
max_df=0.5)),
('clf', MultinomialNB())])
However, by defining it in the way above, I am not allowing for parameter tuning. Let's say I want to look at the following max_dfs = [0.3, 0.4, 0.5, 0.6, 0.7] and also the following n-gram ranges = [(1,1), (1,2), (2,2)], and use cross validation to find the best combination. Is there a way to specify this in or outside the pipeline so it knows to consider all possible combinations? If so, how would this be done?
Thank you so much for your guidance and help!
|
How to do parameter tuning/cross-validation with Sklearn's pipeline?
|
If you consider a typical 5-stage pipeline (IF, ID, EX, MEM, WB), the output of the ADD will be available at the EX -> MEM interface. For the MEM stage of the SW instruction, it needs the memory address, which is 0 + ($t2), and the data, which is supposed to be in $t0. However $t0 has not been updated yet, as the pipeline has not reached the WB stage. However, the value which is supposed to be written to $t0 is available at the EX -> MEM stage. Therefore, you can use forwarding in this scenario to execute the SW instruction without waiting for the ADD to complete.
|
On a static two-issue pipeline for MIPS, can I use the forwarding paths with two instructions running in the same clock cycle?
For example:
1. add $t0, $t0, $t1
2. sw $t0, 0($t2)
Can I execute these two instructions in the same clock cycle?
The sw could use the resulting value of the add when it is going to execute the MEM stage.
Is that correct?
|
MIPS - Forwarding in static multiple-issue
|
This is what the -s option is for:
curl -Ls example.com/script.sh | bash -s -- --option1
Since -s explicitly tells bash to read its commands from standard input, it doesn't try to interpret its first argument as a file from which to read commands. Instead, all arguments are used to set the positional parameters.
Alternatively, you can use a process substitution instead of reading directly from standard input:
bash <(curl -Ls example.com/script.sh) --option1
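To make the hand-off explicit, here is a hypothetical script.sh (the script contents and option names are made up): the arguments after bash -s -- simply arrive as ordinary positional parameters.
#!/bin/bash
# hypothetical contents served at example.com/script.sh
echo "received $# argument(s)"
for arg in "$@"; do
    echo "arg: $arg"
done
Running curl -Ls example.com/script.sh | bash -s -- --option1 --option2 would then print both options.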
|
We know we may do the following in bash:
curl -Ls example.com/script.sh | bash
But can we pass some arguments?
curl -Ls example.com/script.sh | bash --option1
In this case bash will take the option itself. Is there some approach to pass it to the script?
|
Is there a way to pass arguments/options to pipeline-executed script?
|
How about this:
while true; do
files=( $(ls) );
if [[ ${#files[@]} -eq 0 || -n "$(lsof -c write_process -- "${files[0]}")" ]]; then
sleep 5;
else
process ${files[0]};
rm ${files[0]};
fi;
done
...or this (might be unhealthy):
while true; do
files=( $(ls) );
if [[ ${#files[@]} -ne 0 && -z "$(lsof -c write_process -- "${files[0]}")" ]]; then
process ${files[0]};
rm ${files[0]};
fi;
done
Where, as was pointed out, I have to ensure that the file is completely written into the folder. I do this by checking with lsof for the process that writes the files. However, I am not sure yet how to address multiple executions of the same program as one process...
|
In a Linux shell, I would like to treat a folder like a bag of files. There are some processes that put files into this bag. There is exactly one process that is in either one of the two following states:
Process a document, then delete it
Wait for an arbitrary document to exist in the folder, then process it
|
bash - Pick files from folder, process, delete
|
There is a very nice tool out there which helps you a lot in being able to create a pipeline component from scratch. It will create the 'bodywork' for a pipeline component, so you can start development right away.
The tool is called: BizTalk Server Pipeline Component Wizard
Once this is in place, create a custom receive pipeline component.
Depending on what stage of the receive pipeline you want the component to execute in, you will need to get your hands dirty and copy and 'change' the message.
Here is a nice blog article which gives a nice overview, step-by-step, on how to do the above (with the exception of the editing):
http://geekswithblogs.net/bosuch/archive/2012/01/24/creating-a-custom-biztalk-2010-pipeline-componentndashpart-i.aspx
And here is a nice link which gives a sample of how you would potentially change a message in a pipeline component:
https://dipeshavlani.net/2011/04/15/modifying-xml-document-in-a-custom-pipeline-component/
Hope this helps!
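To give a rough idea of the "change the message" part, here is a hedged sketch of only the Execute method of such a component (it assumes the wizard-generated class already provides the IBaseComponent / IPersistPropertyBag / IComponentUI plumbing and the usual Microsoft.BizTalk.Message.Interop / Component.Interop, System.IO and System.Text usings; error handling, encoding detection and stream seeking are omitted, and HtmlDecode is just one possible way to undo the XML escaping):
public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    // read the escaped body, e.g. "&lt;AppRequest&gt;...&lt;/AppRequest&gt;"
    var original = pInMsg.BodyPart.GetOriginalDataStream();
    string escaped = new StreamReader(original).ReadToEnd();

    // turn the escaped entities back into real XML markup
    string unescaped = System.Net.WebUtility.HtmlDecode(escaped);

    // replace the body with the unescaped content and let BizTalk track the stream
    var outStream = new MemoryStream(Encoding.UTF8.GetBytes(unescaped));
    pInMsg.BodyPart.Data = outStream;
    pContext.ResourceTracker.AddResource(outStream);

    return pInMsg;
}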
|
We use an MSMQ type receive location, but have noticed when it receives messages they contain escaped XML.
I'm thinking I need a receive pipeline to unescape it, can anyone tell me what component(s) should be used? I can't see anything obvious in the toolbox.
Thanks
|
Receive pipeline to unescape XML
|