Response | Instruction | Prompt
---|---|---
There is a version directive, but I've never seen it used in practice:

# Snakefile
version: "1.2.3"

rule all:
    input: 'test.txt'

rule test:
    output: temp(touch('test.txt'))

There is an example/test for CWL conversion that uses version inside a rule definition, where perhaps it might be useful, but otherwise the utility of including an explicit version inside a Snakefile is not clear.
|
Is there any standard or recommended way to add a version number to a pipeline (written in snakemake in my case)? For example, I have this pipeline and just now I added a CHANGELOG.md file with the current version on top. Are there better ways to identify the version a user is deploying?
|
Add version to pipeline
|
After hours of modifying my pipeline and reading the documentation, I finally figured out how to fix this problem. It is not because of the YAML file. The error must be related to the pipeline. The solution is to create a new pipeline. The same YAML file will work fine in the new pipeline.
|
I have a problem in the ADOS system: the pipeline fails because there is no "pool" specified. The validation also shows this error. However, I have defined a pool:

trigger:
  branches:
    include:
    - 'main'

pool:
  name: my-pool
  demands:
  - my_pool_demands

[...]

Do you have any clue? I tried to:

- run the pipeline with another pool
- run the pipeline without a pool
- run the pipeline with minimum tasks (only the build task)
- run the pipeline without any comments in it

Nothing could change the "No pool was specified" error.
|
No pool was specified, although a pool is specified in the Azure pipeline
|
Firstly, I'd advise you to split your script into different steps of a single job, or multiple jobs with many steps, because this makes it easier to parallelize them in the future, allowing you to speed up the build time.

In order to execute your script directly from the project folder, you can leverage the workingDirectory option:

- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      ./setup.sh
    failOnStderr: true
    workingDirectory: "$(System.DefaultWorkingDirectory)/projectFolder/"

However, in your case you could point directly to the script, without the need to run it as an inline "script":

- task: Bash@3
  inputs:
    targetType: 'filePath'
    filePath: "$(System.DefaultWorkingDirectory)/projectFolder/setup.sh"
    failOnStderr: true
    workingDirectory: "$(System.DefaultWorkingDirectory)/projectFolder/"

ref.: https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/bash?view=azure-devops
|
I have a script that is nicely performing all kinds of dependency installation and some manual work (NPM installation, some manual steps to do while setting up the project) to set up a project before it is able to run. The script runs perfectly fine in a local environment. I'm now trying to build my pipeline in Azure DevOps, and I realized I can't just fire the script right away. Running npm install inside the script is not actually running within my project folder; it always runs on the path /Users/runner/work.

Question: How can I execute the script within my project folder?

Sample code in my script file:

set -e
# Setup project dependencies
npm install
# some mandatory manual work
.....
# Pod installation
cd ios
pod install

My AzurePipelines.yml:

- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      sh $(System.DefaultWorkingDirectory)/projectFolder/setup.sh
    failOnStderr: true

Issue log from Azure (as you can see, the npm installation is not working due to an incorrect path, hence further actions within the pipeline will fail):

npm WARN saveError ENOENT: no such file or directory, open '/Users/runner/work/package.json'
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN enoent ENOENT: no such file or directory, open '/Users/runner/work/package.json'
npm WARN work No description
npm WARN work No repository field.
npm WARN work No README data
npm WARN work No license field.
|
Running bash script within a project folder of AzureDevOps
|
You can set up the parameters with_mean and with_std of StandardScaler() as False to represent no standardization. In GridSearchCV, the parameter param_grid can be set up as:

param_grid = [{'scale__with_mean': [False],
               'scale__with_std': [False],
               'mnl__solver': ['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'],
               'mnl__max_iter': [500, 1000, 2000, 3000]
               },
              {'mnl__solver': ['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'],
               'mnl__max_iter': [500, 1000, 2000, 3000]}
              ]

Then the first dict in the list is "No Scaler + mnl" and the second is "Scaler + mnl".

Ref:
https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html
https://scikit-learn.org/stable/tutorial/statistical_inference/putting_together.html

Edit:
I think it's more complicated if you also consider turning PCA on/off... Maybe you need to define a customised PCA which derives from the original PCA, and then define an additional boolean argument which determines whether the PCA should be executed or not:

class MYPCA(PCA):
    def __init__(self, PCA_turn_on=True, n_components=None):
        super().__init__(n_components=n_components)
        self.PCA_turn_on = PCA_turn_on

    def fit(self, X, y=None):
        if self.PCA_turn_on:
            return super().fit(X, y)
        return self

    def transform(self, X):
        # pass the data through unchanged when PCA is switched off
        return super().transform(X) if self.PCA_turn_on else X

    # same idea for the other methods defined in PCA (e.g. fit_transform)
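As a side note, newer scikit-learn versions also let you switch an entire pipeline step off directly from the grid by listing the string 'passthrough' as a candidate value for that step. A minimal sketch under that assumption (solvers trimmed to the two from the question's EDIT):

from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([('scale', StandardScaler()),
                 ('pca', PCA()),
                 ('mnl', LogisticRegression(fit_intercept=True, multi_class="multinomial"))])

param_grid = {
    'scale': [StandardScaler(), 'passthrough'],  # toggle scaling on/off
    'pca': [PCA(), 'passthrough'],               # toggle PCA on/off
    'mnl__solver': ['newton-cg', 'lbfgs'],
    'mnl__max_iter': [500, 1000, 2000, 3000],
}

search = GridSearchCV(pipe, param_grid, cv=5)

Each grid candidate then corresponds to one of the Scaler/No Scaler (and PCA/No PCA) combinations the question asks for.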
|
I want to run a logistic regression using GridSearchCV, but I want to contrast the performance when Scaling and PCA are used, so I don't want to use them in all cases. I basically would like to include PCA and Scaling as "parameters" of the GridSearchCV. I am aware I can make a pipeline like this:

mnl = LogisticRegression(fit_intercept=True, multi_class="multinomial")

pipe = Pipeline([
    ('scale', StandardScaler()),
    ('mnl', mnl)])

params_mnl = {'mnl__solver': ['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'],
              'mnl__max_iter': [500, 1000, 2000, 3000]}

The thing is that, in this case, the scaling would be applied in all folds, right? Is there a way to make it so it's "included" in the grid search?

EDIT: I just read this answer and even though it's similar to what I want, it's not really it, because in that case the Scaler will be applied to the best estimator out of the GridSearch. What I want to do is, for example, let's say

params_mnl = {'mnl__solver': ['newton-cg', 'lbfgs']}

I want to run the regression with Scaler+newton-cg, No Scaler+newton-cg, Scaler+lbfgs, No Scaler+lbfgs.
|
Including Scaling and PCA as parameter of GridSearchCV
|
You can change this behavior in the global configuration: Administration > General Settings > Security, setting "Enable local webhooks validation" to false.
|
I'm testing using a SonarQube Server Community Edition Version 8.4.1 (build 35646) and Jenkins Server 2.235.5. Both are on the same machine. I'm trying to implement SonarQube functionality in my Jenkins Pipeline following some tutorials. This is the Pipeline stage/step where I have defined the SonarQube implementation: When I want to define a webhook in SonarQube, this message appears: I already tried updating the Windows hosts file with another name, but it's not working :(
|
Sonarqube webhook not valid with Jenkins Localhost
|
You can begin to understand the need for latches by imagining that they are removed.

The secret is to realize that it takes each block 100 picoseconds to produce valid results. Before that time, the output is invalid, aka junk, and not, as you might think, the previous result. Remember, these are combinational blocks that have no memory.

Now imagine that we place new data on the inputs of Block A every 100 picoseconds. What will the output look like? Well, as soon as the new data is presented to the inputs, the outputs of that block are invalid. This means that Block B has invalid inputs and cannot begin processing data until they are valid. Now, after 100 picoseconds, Block A has valid data going out and Block B can finally begin. But no, the input to Block A changes and Block B has invalid inputs again. The only way to get a valid result through all three is to hold the inputs valid for the whole 300 picoseconds needed to get through all three blocks.

With latches, the valid results from each block are latched and do not change with changing inputs. Thus we can present new data every 100 + 20 picoseconds versus every 300 picoseconds. In other words, with pipeline latches the circuit runs 2.5 times faster.
|
Hi, I'm reading a textbook that describes the pipelined design of a CPU, and I don't understand why we still need clocked registers. For example, as the picture below shows: if we can remove all three registers, we can save 60 ps, because we just need the processor to continually execute instructions, so when a combinational logic block finishes, that's when the next instruction should start to execute. Why do we need a clock cycle to manually control the beginning of instruction execution?
|
What's the purpose of clocked registers in pipelined processor
|
There will be no performance difference; the ternary operator is just syntactic sugar. From the ISO/IEC 9899 C Standard (draft, page 90):

6.5.15 Conditional operator
(...)
Semantics

The first operand is evaluated; there is a sequence point after its evaluation. The second operand is evaluated only if the first compares unequal to 0; the third operand is evaluated only if the first compares equal to 0; the result is the value of the second or third operand (whichever is evaluated), converted to the type described below. (...)
|
I've heard compilers are very smart and know how to optimize if / else statements. I've also heard ternaries are high performance because they go through the CPU's instruction pipeline less.

Let me clarify, based on what I've heard: an if / else must pass its condition through the pipeline and wait for the result before it can perform the calculations for the outcome. However, a ternary can pass both outcomes' calculations to the CPU without having to wait for the boolean expression to pass through the pipeline.

So, which is faster, ternaries or if / else?
|
What is the Performance Cost of Ternary Operator
|
One way of filling the branch delay slot would be:

addiu $2, $2, 4 # We'll now iterate over [$2+4, $10] instead of [$2, $10[
LOOP: lw $1, 96 ($2)
addi $1, $1, 1
sw $1, 496 ($2)
bne $2, $10, LOOP
addiu $2, $2, 4 # Use the delay slot to increase $2
|
I have the following MIPS code and I am looking to rewrite/reorder the code so that I can reduce the number of nop instructions needed for proper pipelined execution while preserving correctness. It is assumed that the datapath neither stalls nor forwards. The problem gives two hints: it reminds us that branches and jumps are delayed and need their delay slots filled in, and it hints at changing the offset value in memory access instructions (lw, sw) when necessary.

LOOP: lw $1, 100 ($2)
addi $1, $1, 1
sw $1, 500 ($2)
addiu $2, $2, 4
bne $2, $10, LOOP

It's quite obvious to me that this code increments the contents of one array and stores them in another array. So I'm not exactly seeing how I could possibly rearrange this code, since the indices need to be calculated prior to completing the loop. My guess would be to move the lw instruction after the branch instruction since (as far as I understand) the instruction in the delay slot is always executed. Then again, I don't quite understand this subject and I would appreciate an explanation. I understand pipelining in general, but not so much delayed branching. Thanks
|
Delayed Branching in MIPS
|
The main problem is that *.sav is not a regular expression, it's a glob. You likely wanted to grep for \.sav, which you have to escape once for the shell and once again because of the antiquated `` syntax:

numOfFiles=`ls $1 | grep -o \\\\.sav | wc -w`

Additionally, you should not parse the output of ls, and wc -w will cause filenames with spaces to be counted multiple times. A better solution would be to use find:

numOfFiles=$(find "$1" -maxdepth 1 -name '*.sav' | wc -l)

For an even better, Bash-specific solution, see 1_CR's answer.
|
I would like to count the number of *.sav files in the $1 folder.
I tried to use the following command but it doesn't work; what is the problem?

numOfFiles=`ls $1 | grep -o *\.sav | wc -w`
|
assigning pipeline result in a variable
|
From the Intel Optimization Reference Manual: the branch prediction unit contains a Return Stack Buffer precisely to predict ret instructions more accurately (section 2.2.2.1). The instruction queueing and decode unit also tracks changes in the stack pointer to improve decoding bandwidth (section 2.2.2.5).

In more detail, section 3.4.1.4 describes some "rules", mostly directed to compiler writers, to benefit from inlining, calls & returns - the most relevant is probably that a near/far call must be paired with a near/far return, which means pushing the return address on the stack and jumping to the callee is not recommended. Also, the call depth is recommended to not exceed 16 nested calls (the size of the RSB).

If those rules are followed, you can effectively treat them like indirect branches during branch selection (section 3.4.1.6), with everything that implies. You will most likely never encounter a stall on a ret, except in pathological cases or self-modifying code.
|
Since the ret instruction is an indirect call, does the ret instruction on x86 stall the pipeline, or is it somehow optimized to behave like a more direct call?
|
Does "ret" stall the pipeline?
|
ORR r0, r1, #0x4 is perfectly fine in standard ARM. You can encode immediate values in a 32-bit ARM instruction, but their range is limited. See this explanation. Your link points to the Thumb documentation; are you sure you need to be using Thumb instructions?
|
I'm writing ARM assembly code that at some point has to set a single bit of a register to 1. This is best done, of course, via the "register-or-bitmask" method. However, according to the ARM documentation, the assembly ORR command (bitwise OR) does not take immediate values. In other words, you can only bitwise-OR a value in one register with a value in another register. When you think about it, it makes sense, because ARM instructions are themselves 32 bits long, so there's no way to cram a 32-bit mask into an instruction. However, writing an immediate value to a register just to use it right away is inefficient because it produces a read-after-write hazard which stalls the CPU. In general, what is the most efficient way to ORR a register with a mask without wasting a register on constantly keeping that mask in memory? Does ARM recommend anything?
|
What is the most efficient way to set one bit of a register in ARM?
|
Well, for one thing you can skip the use of snd by returning a single value rather than a tuple from the previous function:

...
|> SolveEquasion
|> (fun (det, solution) ->
printfn "Determinant = %f\nSolution = %A" det (Array.toList solution)
solution )
|
So I have a function SolveEquasion that returns a pair float*float[]. What is the best way to print the number and the array and continue working with the array? I made the following code but it seems there is a better way...
|> SolveEquasion
|> (fun (det, solution) -> printfn "Determinant = %f\nSolution = %A" det (Array.toList solution), solution )
|> snd
|
printfn in pipeline
|
The default behaviour if you don't specify a condition is to only run if all previous steps/jobs/tasks in the dependency tree have succeeded. And since you have a task in the previous steps which is being skipped, the following stage is not running. I think you can add something like this:

dependsOn: Download_from_source
condition: succeeded('Download_from_source')
|
Guys,
I'm currently working on an Azure DevOps YAML pipeline, and I have a weird problem.
For some reason one of my stages (marked in red) gets skipped, even though it doesn't have any conditions defined. Here's the code of the stage and the previous stage:

Stage before the one that gets skipped:

Stage that gets skipped:

Any ideas what the problem could be?
|
Stage of pipeline gets skipped even though it doesn't have any conditions
|
To answer your questions:

"I am afraid that this approach will also perform class balancing with new data predicting."

This is not correct; where did you get this?

"Am I correct not to balance classes in testing data?"

Class balancing usually works by adding or removing rows (or adjusting weights). All those steps should not be applied during the prediction step, as we want exactly one predicted value for each row in the data. Weights, on the other hand, usually have no effect during the prediction phase. Your assumption is correct.

"If so, is there a way of doing this in mlr3?"

Just use the PipeOp as described in the blog post. During training, it will do the specified over- or under-sampling, while it does nothing during prediction.

Cheers,
|
Lately I have been advised to change machine learning framework to mlr3. But I am finding the transition somewhat more difficult than I thought at the beginning. In my current project I am dealing with highly imbalanced data which I would like to balance before training my model. I have found this tutorial which explains how to deal with imbalance via pipelines and a graph learner:
https://mlr3gallery.mlr-org.com/posts/2020-03-30-imbalanced-data/

I am afraid that this approach will also perform class balancing with new data when predicting. Why would I want to do this and reduce my testing sample? So the two questions that arise are:

- Am I correct not to balance classes in testing data?
- If so, is there a way of doing this in mlr3?

Of course I could just subset the training data manually and deal with the imbalance myself, but that's just not fun anymore! :)

Anyway, thanks for any answers,
Cheers!
|
Dealing with class imbalance with mlr3
|
Data forwarding overcomes some hazards, with the recognition that the necessary value computed by a prior instruction is available sooner than when it appears back in the register. So data forwarding is always a win over stalling and NOPs.

Of course, stalling is sometimes necessary, as in the case you describe with a load-use hazard. In the small, stalling has the same effect as NOPs; however:

- Code size is smaller without the NOPs. Code size has a huge effect in the instruction cache -- this affects performance, and thus code size cannot be ignored.
- Also, from a perspective of architecture longevity, while we may know the number of NOPs needed for some micro-architecture design, this will most likely change in future micro-architectures, so the NOPs inserted in an older program are no longer doing their job properly on the newer hardware.

Thus, we conclude that it is better to let the hardware stall rather than inserting NOPs. For example, an out-of-order machine may internally rearrange instructions to cover a MEM->EX hazard (NOPs would just get in the way).
|
We can use NOPs, data forwarding, and stall cycles to resolve data and load-use hazards. However, if we have multiple data hazards, then it becomes quite inefficient to resolve all of them using NOPs, as they would increase the runtime of the program. In comparison, if we have a load-use hazard, we can use data forwarding and stall cycles to resolve the hazard, and it gives a more efficient result. My question is, how is data forwarding in combination with stall cycles a more efficient way of dealing with data hazards compared to NOPs? Because when we add a stall cycle, the program has to wait a clock cycle to allow for the data forwarding (MEM to EX). Thus the clock cycle count will be increased by 1.
|
Why are data forwarding and stall cycles more efficient than NOPs for dealing with load-use hazards?
|
There are a few ways to call an R script and run it. One of them would be source(). source evaluates the R script and does so in a certain environment if called that way.

Say we have a Test.R script:

#Test.R
a <- 1
rm(list = ls())
b <- 2
c <- 3

and global variables:

a <- 'a'
b <- 'b'
c <- 'c'

Now you would like to run this script, but in a certain environment not involving the global environment you are calling the script from. You can do this by creating a new environment and then calling source:

step1 <- new.env(parent = baseenv())
#Working directory set correctly.
source("Test.R", local = step1)

These are the results after the run; as you can see, the symbols in the global environment are not deleted:

a
#"a"
b
#"b"
step1$a
#NULL
#rm(list = ls()) actually ran in Test.R
step1$b
#2

Note:
You can also run an R script by using system. This will, however, be run in a different R process and you will not be able to retrieve anything from where you called the script.

system("Rscript Test.R")
|
Scenario: Let's say I have a master pipeline.R script as follows:

WORKINGDIR="my/master/dir"
setwd(WORKINGDIR)

# Step 1
tA = Sys.time()
source("step1.R")
difftime(Sys.time(), tA)

# Add as many steps as desired, ...

And suppose that, within step1.R, a

rm(list=ls())

happens.

Question:
How can I separate the pipeline.R (caller) environment from the step1.R environment?
More specifically, I would like to run step1.R in a separate environment such that any code within it, like the rm, does not affect the caller environment.
|
R - Create a separate environment where to source() an R script, such that the latter does not affect the "caller" environment
|
You can copy the same pipe and use it in VideoCapture (if you built OpenCV with gstreamer modules). The important point is that you need to finish the pipe with an appsink element.

const char* pipe = "rtspsrc location=\"rtsp://192.168.0.220:554/user=admin&password=admin&channel=1&stream=0.sdp?real_stream--rtp-caching=10\" latency=10 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! appsink";
VideoCapture cap(pipe);
|
I'm trying to open an IP camera in OpenCV using a gstreamer pipeline.
I can open the IP camera using gstreamer in the terminal, using:

gst-launch-1.0 -v rtspsrc location="rtsp://192.168.0.220:554/user=admin&password=admin&channel=1&stream=0.sdp?real_stream--rtp-caching=10" latency=10 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! xvimagesink

Now, with this, how can I open the same camera in OpenCV VideoCapture()? Any help is appreciated.
|
Gstreamer pipeline in Opencv videoCapture()
|
Since a dictionary in Python is an unordered collection and ITEM_PIPELINES has to be a dictionary (as do a lot of other settings, like, for example, SPIDER_MIDDLEWARES), you need to, somehow, define an order in which pipelines are applied. This is why you need to assign a number from 0 to 1000 to each pipeline you define.

FYI, if you look into the Scrapy source, you'll find the build_component_list() function, which is called for each setting like ITEM_PIPELINES - it makes a list (ordered collection) out of the dictionary you define in ITEM_PIPELINES, using the dictionary values for sorting:

def build_component_list(base, custom):
"""Compose a component list based on a custom and base dict of components
(typically middlewares or extensions), unless custom is already a list, in
which case it's returned.
"""
if isinstance(custom, (list, tuple)):
return custom
compdict = base.copy()
compdict.update(custom)
items = (x for x in six.iteritems(compdict) if x[1] is not None)
return [x[0] for x in sorted(items, key=itemgetter(1))]
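To see what that sorting does in practice, here is a small illustration reusing the pipeline names from the question's example (the demonstration itself is not Scrapy code, just plain Python):

ITEM_PIPELINES = {
    'myproject.pipelines.JsonWriterPipeline': 800,
    'myproject.pipelines.PricePipeline': 300,
}

# Sorting the dict keys by their values reproduces the order items flow through:
order = sorted(ITEM_PIPELINES, key=ITEM_PIPELINES.get)
# ['myproject.pipelines.PricePipeline', 'myproject.pipelines.JsonWriterPipeline']
# i.e. PricePipeline (300) runs before JsonWriterPipeline (800).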
|
In the Scrapy documentation there is this information:

Activating an Item Pipeline component

To activate an Item Pipeline component you must add its class to the ITEM_PIPELINES setting, like in the following example:

ITEM_PIPELINES = {
    'myproject.pipelines.PricePipeline': 300,
    'myproject.pipelines.JsonWriterPipeline': 800,
}

The integer values you assign to classes in this setting determine the order they run in - items go through pipelines from order number low to high. It's customary to define these numbers in the 0-1000 range.

I do not understand the last paragraph, mainly "determine the order they run in - items go through pipelines from order number low to high". Can you explain it in other words? What are those numbers chosen based on? If the range is 0-1000, how do I choose the values?
|
Scrapy - Activating an Item Pipeline component - ITEM_PIPELINES setting
|
If your pipeline is very much like a chain of method calls, use a chain of method calls! There's no point making the solution more complicated than it needs to be; if it's well-modelled by a chain of method calls, just use that. (Or functions, which you can compose.)

If you need something slightly more complicated but you don't actually need any message-passing, you might want something like AsyncFP or Scala.Rx.

If you need a multi-core solution, but you have stretches that look like method calls, then have a chain of method calls inside one stop. You could use Akka streams for that without having to worry so much about the overhead-to-useful-computation ratio.
|
I'm curious about the current libraries for Scala & Akka which would allow me to elegantly build a workflow pipeline. In my case a workflow is just a DAG of operations, so actors/Akka feels like a good fit. My question is: what's the best approach? There are libs like Reactive Streams which allow really elegant composition of a pipeline, but they seem very record-focused. My use case is a flow of operations passing messages between them. Future composition is nice, but the syntax becomes unwieldy after a while. Maybe there is something better with scalaz and shapeless. What are the approaches and tools for building a DSL for pipelines of computation steps using message passing?
|
Scala pipelines - DSL for building a DAG workflow
|
Because it can't be sure it has tee'd all the output if the child process is still running (and still has its standard output open). Since the parent and child use the same standard output (which is connected to tee's input, due to the pipe), there is no way for tee to distinguish them. Since it consumes all input, both the parent and child must close their standard output (or terminate) before tee will see an end-of-input condition.

If you want tee to exit when the parent script does, you should redirect the output of the child (to a file or to /dev/null, for example).
|
I have a server script that runs mysqld and forks to continue running. As an example:

./mysqld <parameters> &
echo "Parent runs next line in script."
<do more stuff>

Why does tee wait for the child process to end before it ends itself?

EDIT: For example, the following always hangs:

./myscript | tee -a logfile.log
|
Why does tee wait for all subshells to finish?
|
You need to wrap up your execution code into a PROCESS block:

function MyScript {
param(
[Parameter(Mandatory=$true,
ValueFromPipeline=$true,
ValueFromPipelinebyPropertyName=$true)]
[System.IO.DirectoryInfo[]] $PsPath
)
PROCESS {
$PsPath
}
}
gci c:\ -Directory | MyScript

Don Jones has a nice rundown of the BEGIN, PROCESS, & END blocks here: http://technet.microsoft.com/en-us/magazine/hh413265.aspx
|
I am trying to write a PowerShell script that accepts directories from the pipeline as a named parameter. My parameter declaration looks like:

param([Parameter(Mandatory=$true, ValueFromPipeline=$true, ValueFromPipelinebyPropertyName=$true)] [System.IO.DirectoryInfo[]] $PsPath)

My problem is that the call gci c:\ -Directory | MyScript results in only the last element of the result of gci being in the input array. What is wrong here?

Thanks in advance,
Christoph
|
Pass directories from pipeline as Powershell named parameter
|
This matches my understanding of how pipelines process objects. Objects travel "as far" in the pipeline as possible before the next item is touched. Some cmdlets don't use the "process" block but instead use the "end" block out of necessity (for example, sort-object has to have all of the items to actually perform the sort) so they block the pipeline. Others use write-output (which your second $_ is doing implicitly) and keep the pipeline moving.
|
In PowerShell, it appears that cmdlets in a pipeline aren't executed in an obvious order. Rather than each cmdlet executing and then passing its results to the next cmdlet in the pipeline, it seems that individual output objects of a cmdlet are passed to the input of the next cmdlet before the execution of the previous cmdlet completes. The following confirms this behavior:

1..5 | %{ Write-Host $_; $_ } | %{ Write-Host ([char]($_ + 64)) }

prints

1
A
2
B
3
C
4
D
5
E

It appears what's happening is that each execution of the ForEach-Object cmdlet will execute its script block and each subsequent command in its pipeline before iterating. Is this what actually occurs, and is this behavior documented anywhere? Is this the case for all iterative cmdlets like ForEach-Object (e.g. Where-Object, etc.)?

I know I can wrap pieces in an expression ((1..5 | %{ Write-Host $_; $_ }) | %{ Write-Host ([char]($_ + 64)) }) or assign a piece to a variable and then pipe that to subsequent pipeline commands to avoid this behavior, but is there a way to perform an operation on each element of a collection and then pass that entire collection on to the next command in a pipeline?
|
How does the order of execution of the ForEach-Object cmdlet in a pipeline work?
|
Yes, this loop will likely benefit from branch prediction / speculative execution.

Loop unrolling by hand is generally considered to be an obsolete optimization, see for example here: https://www.intel.com/content/www/us/en/developer/articles/technical/avoid-manual-loop-unrolling.html

Speculative execution does not change the observed behaviour of your program. It does not even require compiler support, since it is something the CPU itself does when it encounters conditional jumps. Whether your iterations will be correctly predicted will depend on what happens inside of foo and possibly even on the data in src. If foo has too many conditionals or if the conditionals follow hard-to-predict patterns, the speed will be lower.

Other optimizations may appear in the code, though, if the compiler thinks they are beneficial: there might be loop unrolling, there might be SIMD operations. To see what the compiler actually does with your code you can try https://godbolt.org/
|
Consider the loop below (https://godbolt.org/z/z4Wz1aanK), which has no loop-carried dependence. Will a modern CPU speculatively execute the next iteration together with the previous one? If so, is loop unrolling still necessary here?

void bar(void)
{
for (int i = 0; i < 1024; i++)
out[i] = foo(src[i]);
}

The result of compilation:

bar():
pushq %rbx
xorl %ebx, %ebx
.L2:
movl src(%rbx), %edi
addq $4, %rbx
call foo(int)
movl %eax, out-4(%rbx)
cmpq $4096, %rbx
jne .L2
popq %rbx
ret
src:
.zero 400
out:
.zero 400

Update 1: Now I am sure speculative execution can cross loop iterations. The question is how far it can go, considering the dependency chain introduced by the loop count i.
|
Can speculative execution of modern CPUs cross loop iterations?
|
You can run terraform plan like this:

terraform plan -detailed-exitcode

If the exit code == 2, then there are changes present. (Source)
|
I'm trying to create a pipeline (in Azure DevOps) that runs some Terraform code. The logic of what I'm trying to do is:

- Run terraform plan
- Check if terraform plan has changes
- If so, prompt the user to manually review the changes and then accept / reject
- Proceed to terraform apply if accepted

I have the terraform plan stage all working, but I'm struggling to identify how to programmatically determine whether the output of terraform plan has changes to review or not. I was trying to use the following environment variables that are supposedly set to true/false after running 'terraform plan':

TERRAFORM_PLAN_HAS_CHANGES
TERRAFORM_PLAN_HAS_DESTROY_CHANGES

But it doesn't appear that they are being set, at least in the Terraform version that I'm using (v1.4.2). What would be the best way of programmatically checking if there are changes to review? Or should I shift my logic?
|
Terraform in a pipeline - how to check for plan changes
|
Right after this I went to investigate the testcontainers-go source code and found that all I had to do was to define, in my ContainerRequest:

SkipReaper: true,
|
I'm using bitbucket pipelines to run my Go project tests that use Testcontainers.
Pipelines fail with the message:

Error response from daemon: authorization denied by plugin pipelines: --mounts is not allowed: creating reaper failed: failed to create container

So I set export TESTCONTAINERS_RYUK_DISABLED=true, which I found in the Testcontainers Java docs. It doesn't seem to do anything.

Using go 1.19.2 and github.com/testcontainers/testcontainers-go v0.15.0
|
Disable RYUK (Testcontainers for Go)
|
Solved it! Just use another VectorAssembler (at the end) in the pipeline:

assemblerAll = VectorAssembler(inputCols= ["numericFeatures", "categFeatures"], outputCol="allFeatures")
pipeline = Pipeline(stages = [assembler] + indexers + encoders + [assemblerCateg] + [assemblerAll])
|
Using pyspark, I have created two VectorAssemblers, the first with multiple numeric columns ('colA', 'colB', 'colC'), and the second with multiple categorical columns ('colD', 'colE'; I applied OneHotEncoder on each column). I could create these VectorAssemblers separately. How can I combine the outputs into a single vector column (so that I can feed it into an XGBoost model)?

I tried the following, but got "TypeError: can only concatenate str (not "list") to str":

# my dataframe with all columns is df
# VectorAssembler 1: with 3 numeric columns
numeric_cols = ['colA', 'colB', 'colC']
assembler = VectorAssembler(
inputCols= numeric_cols,
outputCol="numericFeatures"
)
# VectorAssembler 2: with 2 categorical columns
categ_cols = ['colD', 'colE']
indexers = [
StringIndexer(inputCol=c, outputCol="{0}_indexed".format(c))
for c in categ_cols
]
encoders = [
OneHotEncoder(
inputCol=indexer.getOutputCol(),
outputCol="{0}_encoded".format(indexer.getOutputCol()))
for indexer in indexers
]
assemblerCateg = VectorAssembler(
inputCols = [encoder.getOutputCol() for encoder in encoders],
outputCol = "categFeatures"
)
pipeline = Pipeline(stages = [assembler] + indexers + encoders + [assemblerCateg])
df2 = pipeline.fit(df).transform(df)
|
PySpark: combining output of two VectorAssemblers
|
You can check the default location with:

import transformers  # it is important to load the library before checking!
import os
os.environ['TRANSFORMERS_CACHE']

In case you want to change the default location, please have a look at this answer.
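If the environment variable is not set, the library falls back to a default cache directory, typically somewhere under the .cache/huggingface folder in your user home. The exact attribute that exposes the resolved path has moved between transformers releases, so treat the following as an assumption to verify against your installed version:

import transformers

# On many 4.x versions the resolved default cache path is exposed here
# (assumption; the attribute location varies across releases):
print(transformers.utils.TRANSFORMERS_CACHE)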
|
I'm using Hugging Face's Transformers pipeline function to download the model and the tokenizer. My Windows PC downloaded them, but I don't know where they are stored on my PC. Can you please help me?

from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="mrm8488/bert-multi-cased-finetuned-xquadv1",
tokenizer="mrm8488/bert-multi-cased-finetuned-xquadv1"
)
|
Transformers pipeline model directory
|
As a rule of thumb, you shouldn't pipeline into an indexer property. Use indexers explicitly. For F# collections modules, there usually exists a lookup function that can be used instead of index access (Array.get, Map.find, Set.contains).
|
How should one pipeline an index into an indexer property of e.g. a Map?

let aMap = [
(1,'a')
(2,'b')
] |> Map.ofList
let someChar =
1
|> aMap.Item

In the example above, I get the error that "An indexer property must be given at least one argument". The following works, but in real code it gets kind of ugly and strange looking. Is there a better way? What is the language reason for not accepting the pipelined item for the indexer property? Is it because it is a getter property instead of a simple method?

let someChar =
1
|> fun index -> aMap.[index]

Edit: below is a better representation of the actual usage scenario, with the solution I went with, inside a transformation pipeline, e.g.:

let data =
someList
|> List.map (fun i ->
getIndexFromData i
|> Map.find <| aMap
|> someOtherFunctionOnTheData
)
|> (*...*)
|
F# Pipeline operator into an indexer property?
|
sklearn already has such a transformer, KBinsDiscretizer (to match pd.qcut, use strategy='quantile'). It will differ primarily in how it transforms test data: the FunctionTransformer version will "refit" the quantiles, whereas the builtin KBinsDiscretizer will save the quantile statistics for binning test data. As @m_power notes in a comment, they also differ near bin edges, as well as in the format of the transformed data.

But to address the error specifically: it means your function qcut only applies to a 1D array, whereas FunctionTransformer sends the entire dataframe. You can define a thin wrapper around qcut to make this work, like:

def frame_qcut(X, y=None, q=10):
    return X.apply(pd.qcut, axis=0, q=q)

(That's assuming you'll get a dataframe in.)
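For completeness, a rough sketch of the KBinsDiscretizer route in the original pipeline (n_bins mirrors the question's q=5; FeatureSelector is the class from the question):

from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import KBinsDiscretizer

fare_transformer = Pipeline([
    ('fare_selector', FeatureSelector(['Fare'])),
    ('fare_imputer', SimpleImputer(strategy='median')),
    # quantile strategy gives equal-frequency bins, matching pd.qcut's behaviour
    ('fare_bands', KBinsDiscretizer(n_bins=5, encode='ordinal', strategy='quantile')),
])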
|
I'm working on making a DataFrame pre-processing pipeline using sklearn and chaining various types of pre-processing steps. I wanted to chain a SimpleImputer transformer and a FunctionTransformer applying a pd.qcut (or pd.cut), but I keep getting the following error:

ValueError: Input array must be 1 dimensional

Here's my code:

from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import FunctionTransformer

class FeatureSelector(BaseEstimator, TransformerMixin):
    def __init__(self, features):
        self._features = features

    def fit(self, X, y=None):
        return self

    def transform(self, X, y=None):
        return X[self._features]

fare_transformer = Pipeline([
    ('fare_selector', FeatureSelector(['Fare'])),
    ('fare_imputer', SimpleImputer(strategy='median')),
    ('fare_bands', FunctionTransformer(func=pd.qcut, kw_args={'q': 5}))
])

The same happens if I simply chain the FeatureSelector transformer and the FunctionTransformer with pd.qcut and omit the SimpleImputer:

fare_transformer = Pipeline([
    ('fare_selector', FeatureSelector(['Fare'])),
    ('fare_bands', FunctionTransformer(func=pd.qcut, kw_args={'q': 5}))
])

I searched Stack Overflow and Google extensively but could not find a solution to this issue. Any help here would be greatly appreciated!
|
Is there a way to chain a pd.cut FunctionTransformer in a sklearn Pipeline?
|
This is the expected behavior. The way permutation importance works is to shuffle the input data and apply it to the pipeline (or the model, if that is what you want). In fact, if you want to understand how the initial input data affects the model, then you should apply it to the pipeline.

If you are interested in the feature importance of each of the additional features that are generated by your preprocessing steps, then you should generate the preprocessed dataset with column names and then apply that data to the model (using permutation importance) directly, instead of the pipeline.

In most cases people are not interested in learning the impact of the secondary features that the pipeline generates. That is why they use the pipeline here to encompass the preprocessing and modeling steps.
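A rough sketch of the two options described above, reusing the step names from the scikit-learn example quoted in the question:

from sklearn.inspection import permutation_importance

# 1) Importance of the original input columns (preprocessing included):
result_raw = permutation_importance(rf, X_test, y_test,
                                    n_repeats=10, random_state=42)

# 2) Importance of the derived/preprocessed features (model only):
X_test_pre = rf.named_steps['preprocess'].transform(X_test)
result_derived = permutation_importance(rf.named_steps['classifier'], X_test_pre, y_test,
                                        n_repeats=10, random_state=42)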
|
I am using the exact example from SciKit, which compares permutation_importance with tree feature_importances. As you can see, a Pipeline is used:

rf = Pipeline([
('preprocess', preprocessing),
('classifier', RandomForestClassifier(random_state=42))
])
rf.fit(X_train, y_train)

permutation_importance: Now, when you fit a Pipeline, it will "Fit all the transforms one after the other and transform the data, then fit the transformed data using the final estimator." Later in the example, they use permutation_importance on the fitted model:

result = permutation_importance(rf, X_test, y_test, n_repeats=10,
                                random_state=42, n_jobs=2)

Problem: What I don't understand is that the features in the result are still the original non-transformed features. Why is this the case? Is this working correctly? What is the purpose of the Pipeline then?

tree feature_importance: In the same example, when they use the feature_importance, the results are transformed:

tree_feature_importances = (
    rf.named_steps['classifier'].feature_importances_)

I can obviously transform my features and then use permutation_importance, but it seems that the steps presented in the examples are intentional, and there should be a reason why permutation_importance does not transform the features.
|
Permutation importance using a Pipeline in SciKit-Learn
|
In sklearn's Pipeline, the scaler isn't applied to the target. Only the independent variables (aka features) get scaled. Thus, the use of TransformedTargetRegressor in your code is not redundant.
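To make the division of labour explicit, a minimal sketch (keyword arguments used for clarity; the pipeline's scaler touches only X, while the transformer passed to TransformedTargetRegressor scales y during fit and inverse-transforms predictions back to the original scale):

from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler

model = make_pipeline(
    RobustScaler(),                       # scales the features (X) only
    TransformedTargetRegressor(regressor=Ridge(),
                               transformer=RobustScaler()),  # scales the target (y)
)
# model.fit(x_train, y_train); model.predict(...) returns y on the original scale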
|
When using a scaling function in sklearn's Pipeline utility, is the scaler applied to the target variable during training and prediction? In other words, is my code below, which makes use of TransformedTargetRegressor, redundant with the pipeline?

cowboy = Lasso(max_iter=10000, tol=.005)
climber = Ridge()
gymshorts = ElasticNet()
scaler = pre.RobustScaler()
models = [('xgb', xgb.XGBRegressor(**best_params)),
('ridge', make_pipeline(scaler, TransformedTargetRegressor(climber, scaler))),
('lasso', make_pipeline(scaler, TransformedTargetRegressor(cowboy, scaler))),
('enet', make_pipeline(scaler, TransformedTargetRegressor(gymshorts, scaler)))]
stack = ensemble.StackingRegressor(estimators=models)
stack = stack.fit(x_train, y_train)
|
Does including a scaler in sklearn's Pipeline scale the target variable?
|
A cmdlet that produces no output doesn't actually emit $null - it (implicitly) emits the [System.Management.Automation.Internal.AutomationNull]::Value singleton that in expressions acts like $null, but in enumeration contexts such as the pipeline enumerates nothing and therefore sends nothing through the pipeline - unlike an actual $null value.

# True $null *is* sent through the pipeline.
PS> $var = $null; $var | ForEach-Object { 'here' }
here
# [System.Management.Automation.Internal.AutomationNull]::Value is *not*.
# `& {}` is a simple way to create this value.
PS> $var = & {}; $var | ForEach-Object { 'here' }
# !! No output

As of PowerShell 7.0, [System.Management.Automation.Internal.AutomationNull]::Value can only be discovered indirectly, using obscure techniques such as the following:

# Only returns $true if $var contains
# [System.Management.Automation.Internal.AutomationNull]::Value
$null -eq $var -and @($var).Count -eq 0

This lack of discoverability is problematic, and improving the situation by enabling the following is the subject of this GitHub proposal.

$var -is [AutomationNull] # WISHFUL THINKING as of PowerShell 7.0
|
I am writing a Chef library to make writing a custom resource for managing Microsoft MSMQ resources on Windows Server easier. Chef interfaces with Windows using PowerShell 5.1. I want to raise an error if my call to Get-MsmqQueue fails and returns $Null. To do so, I have created a filter to raise an error if the value is invalid. This seems to work if I pipeline a $Null value, but if the value is returned from Get-MsmqQueue and is $Null, it does not work.

Does anybody have an idea why line #5 does not raise an error, even if the value is equal to $Null?

#1 filter Test-Null{ if ($Null -ne $_) { $_ } else { Write-Error "object does not exist" }}
#2 $a = $Null
#3 $a | Test-Null | ConvertTo-Json # this raises an error
#4 $a = Get-MsmqQueue -Name DoesNotExist
#5 $a | Test-Null | ConvertTo-Json # this does NOT raise an error
#6 $Null -eq $a # this evaluates to $True
|
Powershell filter ignored in pipeline
|
As @Kehinde said in a comment, what you want can be achieved with the Multi-repo checkout feature.

Note: Multi-repo checkout is only supported for YAML pipelines. The design logic is that checkouts from multiple repos, in combination with YAML builds, focus source-level dependency management in one structured descriptor file in Git (the YAML build definition) for good visibility. For pipelines configured with the classic UI, you have to add those other repositories/projects as submodules, or run git checkout via manual scripts in the pipeline. Personally, I strongly suggest you use YAML to achieve what you want.

Simple sample YAML definition:

resources:
  repositories:
  - repository: tools
    name: Tools
    type: git
steps:
- checkout: self
- checkout: tools
- script: dir $(Build.SourcesDirectory)

Here, imagine I have a repository called "MyCode" with a YAML pipeline, plus a second repository called "Tools". The third step above (dir $(Build.SourcesDirectory)) will show you two directories, "MyCode" and "Tools", in the sources directory. Hope this helps.

For Bitbucket:

resources:
  repositories:
  - repository: MyBitBucketRepo
    type: bitbucket
    endpoint: MyBitBucketServiceConnection
    name: {BitBucketOrg}/{BitBucketRepo}

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- checkout: self
- checkout: MyBitBucketRepo
- script: dir $(Build.SourcesDirectory)
|
I have a project which depends on 2-3 other projects; is there a way to pull them together with the master project, so that when the build process starts the projects will be on the file system and the master project can locate the other dependency projects?
|
azure pipeline pull dependency projects
|
You could use the unix which command, like:

veppath=`which vep`
vcf2maf.pl --vep-path $veppath ...

"[the vep path is] stored inside the conda environment which will have a user specific absolute path"

The variable CONDA_PREFIX contains the path to the current conda environment, so you could also do something like:

vcf2maf.pl --vep-path $CONDA_PREFIX/bin/vep ...
|
I'm using vcf2maf to annotate variants as part of a snakemake pipeline:

rule vcf2maf:
    input:
        vcf="vcfs/{sample}.vcf",
        fasta=vep_fasta,
        vep_dir=vep_dir
    output:
        "mafs/{sample}.maf"
    conda:
        "../envs/annotation.yml"
    shell:
        """
        vcf2maf.pl --input-vcf {input.vcf} --output-maf {output} \
            --tumor-id {wildcards.sample}.tumor \
            --normal-id {wildcards.sample}.normal \
            --ref-fasta {input.fasta} --filter-vcf 0 \
            --vep-data {input.vep_dir} --vep-path [need path]
        """

The conda environment has two packages: vcf2maf and vep. vcf2maf requires a path to vep to run properly, but I'm not sure how to access vep's path since it's stored inside the conda environment, which will have a user-specific absolute path. Is there an easy way to get vep's path so I can refer to it for --vep-path?
|
How to refer to executable inside anaconda environment in Snakemake
|
"Where should I run tests, at the build or the release pipeline?"

Indeed, just like the comment of 4c74356b41 says, it depends on what you test. In general: unit/integration on build, smoke/UI on release.

"But is it also possible to run the unit tests at the release?"

The answer is yes. According to the official document for the Visual Studio Test task:

"Use this task in a build or release pipeline to run unit and functional tests (Selenium, Appium, Coded UI test, and more) using the Visual Studio Test Runner."

But when you run the unit tests in the release pipeline, you need to use a copy task and a Publish build artifacts task to copy the dll and test.dll to the artifacts, so that you can get them in the release pipeline and test there. Check the similar thread for some more details. As a test, it works fine on my side. Hope this helps.
|
I'm trying to set up a build and release pipeline, but I saw that it is possible to run Visual Studio tests in a build pipeline as well as in a release pipeline. Does anybody have advice on which one I should choose?
|
Where should I run tests at build or release pipeline?
|
Azure UI testing involves some considerations, and is part of a release pipeline (as described in "UI test with Selenium") added on top of your existing build pipeline.

The idea is: those tests come with their own Visual Studio Unit Test project and code, which can evolve independently of the main project code, at its own pace: it can be hosted in its own Git repository. But the execution of those tests will be based on the deliverable produced by the build of the project itself (done in the build pipeline).
|
I am working on setting up an Azure DevOps Git repo, build and release pipelines. We have separate team members responsible for development of the application and others responsible for UI test automation. My question is, should I have:

a) one repo for the application code and integration UI testing, with a single build pipeline building everything and a single release pipeline deploying and running UI tests, or
b) two separate repos for the application and UI tests, and two separate build and release pipelines?

If you have experience in setting this up, which one is the preferred method and why?
|
Azure DevOps Git UI Automation same repo or seperate
|
Your brackets are in the wrong place / you are missing brackets when creating the Pipeline; it should be a list of tuples:

pipeline = Pipeline([
('text_length', AverageWordLengthExtractor()),
('scale', StandardScaler())
])
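As a side note, make_pipeline generates the step names for you, which avoids this class of bracket mistake entirely (AverageWordLengthExtractor is the class from the question):

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

pipeline = make_pipeline(AverageWordLengthExtractor(), StandardScaler())
# equivalent to Pipeline([('averagewordlengthextractor', ...), ('standardscaler', ...)])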
|
I am trying to create an sklearn pipeline which will first extract the average word length in a text, and then standardize it using StandardScaler.

Custom transformer:

class AverageWordLengthExtractor(BaseEstimator, TransformerMixin):
    def __init__(self):
        pass

    def average_word_length(self, text):
        return np.mean([len(word) for word in text.split( )])

    def fit(self, x, y=None):
        return self

    def transform(self, x, y=None):
        return pd.DataFrame(pd.Series(x).apply(self.average_word_length))

My goal is to achieve this. X is a pandas series with text values. This works:

extractor=AverageWordLengthExtractor()
print(extractor.transform(X[:10]))
sc=StandardScaler()
print(sc.fit_transform(extractor.transform(X[:10])))

The pipeline I created for this is:

pipeline = Pipeline([('text_length', AverageWordLengthExtractor(), 'scale', StandardScaler())])

But pipeline.fit_transform() produces the error below:

Traceback (most recent call last):
File "custom_transformer.py", line 48, in <module>
main()
File "custom_transformer.py", line 43, in main
'scale', StandardScaler())])
File "/opt/conda/lib/python3.6/site-packages/sklearn/pipeline.py", line 114, in __init__
self._validate_steps()
File "/opt/conda/lib/python3.6/site-packages/sklearn/pipeline.py", line 146, in _validate_steps
names, estimators = zip(*self.steps)
ValueError: too many values to unpack (expected 2)
|
Sklearn pipeline throws ValueError: too many values to unpack (expected 2)
|
You may create a PIPELINED TABLE function. Let's say this is your table:

create table vw_people ( ID INTEGER, NAME VARCHAR2(10));
INSERT INTO vw_people(id,name) VALUES ( 1,'Knayak');
commit;

Create an object and a collection of the object type:

CREATE OR REPLACE TYPE vw_people_typ AS OBJECT( ID INTEGER,NAME VARCHAR2(10));
CREATE OR REPLACE TYPE vw_people_tab AS TABLE OF vw_people_typ;

This is your function:

CREATE OR REPLACE FUNCTION testpowerbi RETURN vw_people_tab
PIPELINED
AUTHID current_user
AS
vwt vw_people_tab;
PRAGMA autonomous_transaction;
BEGIN
sar.pk_sar_enable_roles;
commit;
SELECT
vw_people_typ(id,name)
BULK COLLECT
INTO vwt
FROM
vw_people;
FOR i IN 1..vwt.count LOOP
PIPE ROW ( vw_people_typ(vwt(i).id,vwt(i).name) );
END LOOP;
END testpowerbi;
/

Query the output of the function as a TABLE:

select * from TABLE(TestPowerBI);
ID NAME
---------- ----------
1 Knayak
|
I have a SAR-protected Oracle database from which I need to expose a table to PowerBI. I am not familiar with PL/SQL. I have managed to expose a column of the table to PowerBI. Help is needed in 2 areas:

1) I require help to return selective columns from the table
2) I require help to return all the columns from the table

DROP TYPE testarr;
CREATE OR REPLACE TYPE testarr IS TABLE OF VARCHAR2(70);
/
GRANT EXECUTE ON testarr TO public;
DROP FUNCTION TestPowerBI
CREATE OR REPLACE FUNCTION TestPowerBI
RETURN testarr AUTHID CURRENT_USER AS
o_recorset SYS_REFCURSOR;
arr testarr := testarr();
pragma autonomous_transaction;
BEGIN
sar.pk_sar_enable_roles.............
commit;
OPEN o_recordset FOR
SELECT NAME FROM vw_people;
FETCH o_recordset BULK COLLECT INTO arr;
CLOSE o_recordset;
RETURN arr;
END TestPowerBI
Grant execute on TestPowerBi to public;
|
How do I return multiple columns using plsql
|
"Rerun with upstream in pipeline" basically means "recalculate with all dependencies". For example, if one haspipeline1 -> dataset1 -> pipeline2and tries to rerunpipeline2with dependecies, thenpipeline1andpipeline2will be both executed. I believe it works same with several chained activities within single pipeline.
|
I am defining a pipeline in Data Factory; I had some errors that I corrected.
The first activity is calling a U-SQL script to do some aggregation. I changed the script plenty of times, but the error is still:

[{"errorId":"E_CSC_USER_SYNTAXERROR","severity":"Error","component":"CSC","source":"USER","message":"syntax
error. Final statement did not end with a semicolon","details":"at
token 'usql', line 4\r\nnear the ###:\r\n**************\r\nCLARE
@lineitemsfile string =
\"/datalakerepo/input/2016/01/01lineitems.txt\";\nDECLARE @ordersfile
string = \"/datalakerepo/input/2016/01/01orders.txt\";\nsales.usql ###
\n","description":"Invalid syntax found in the
script.","resolution":"Correct the script syntax, using expected
token(s) as a
guide.","helpLink":"","filePath":"","lineNumber":4,"startOffset":228,"endOffset":232}].seem like not all usql script is read from the data factory, so I though that may be the "rerun in upstream in pipeline" have something to do with this, like clear cache from previous script.Anyone knows what "rerun in upstream in pipeline" does?
Many thanks!
|
What does "rerun in upstream in pipeline"?
|
You can use a combination of FeatureUnion and custom transformers that take only the variable you're interested in. However, you're right in that sklearn does not handle heterogeneous feature sets particularly well. There is a library, sklearn-pandas, which makes it a lot easier, letting you define separate pipelines for specific columns of a pandas dataframe.
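A rough sketch of that idea, using a hypothetical column-selector transformer and made-up column names, where only 'colA' gets scaled:

from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.preprocessing import StandardScaler

class ColumnSelector(BaseEstimator, TransformerMixin):
    """Hypothetical helper: pass through only the requested DataFrame columns."""
    def __init__(self, columns):
        self.columns = columns
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return X[self.columns]

# Scale only 'colA'; pass 'colB' and 'colC' through untouched.
features = FeatureUnion([
    ('scaled', Pipeline([('pick', ColumnSelector(['colA'])),
                         ('scale', StandardScaler())])),
    ('raw', ColumnSelector(['colB', 'colC'])),
])
# features.fit_transform(df) can then feed the rest of a modelling pipeline.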
|
Reading the scikit-learn doc on Pipeline, all the examples apply the transformers to the entire dataset (e.g. StandardScaler, PCA). Is it possible to, say, only scale a specific variable in the dataset? If this is possible, then I can put my entire feature engineering process into a Pipeline and apply it to both my train and test sets.
|
Can I use scikit-learn pipeline to transform a specific variable only?
|
You need to use some kind of queue to accumulate and rotate enough previous pipeline items:

function Window {
param($Size)
begin {
$Queue = [Collections.Queue]::new($Size)
}
process {
$Queue.Enqueue($_)
if($Queue.Count -eq $Size) {
@(
,$Queue.ToArray()
[void]$Queue.Dequeue()
)
}
}
And you can use it like this:

1..10 | Window 4 | Window 3 | Format-Custom -Expand CoreOnly
|
I was looking for a "window" function like F#'s Seq.windowed or the Reactive Extensions' Window. It looks like it would be provided by the likes of Select-Object (which already has take / skip functionality), but it is not. If nothing is readily available, any ideas on implementing "window" without unnecessary procedural looping? I'm looking for something that works with the PowerShell pipeline nicely.
|
Does PowerShell have a "window" function?
|
As this is defined in the architecture, you always have the guarantee that if a branch is mispredicted and the pipeline has to be flushed, all following instructions cannot have a visible impact on the architectural state.

There are several ways to do it. In simple implementations (short pipelines), instructions will generally be committed (i.e. write an architecturally visible modification) only when it is guaranteed that they are not in the shadow of a branch (or of a load that can fault) anymore.

In more complex CPUs, with longer pipelines and out-of-order cores, the technique that is generally used is register renaming: https://en.wikipedia.org/wiki/Register_renaming. In this case, the instructions will be able to complete and write their result to a temporary register or location, and the CPU will have mechanisms to either restore the state (in case of a flush) or only commit the temporary results to architectural registers when it has the guarantee that these results cannot be flushed anymore.
|
This is a theoretical question and I feel stuck. Suppose I take the ARM ISA and a pipelined datapath. I am using a branch predictor which, for simplicity, always predicts that a branch is taken. Evidently, this works if the branch was indeed to be taken, but fails otherwise. If it fails, it has to roll back and undo all the changes, i.e. flush the pipeline. How is it supposed to do so? What if some value has been written to some register? Then how can I bring that register back to its previous value? The same goes for flags.
|
How to flush pipeline?
|
I don't think that this is a matter of which can accept an array and which can't; this is more an issue of which cmdlets accept piped input and which don't, and for the ones that do accept it, how they use the objects piped into them. For the most part you will want to look at the help for whatever cmdlet you are interested in to see if it accepts piped input.

Now, Where-Object is designed to work on an array of objects piped into it, so it doesn't specifically state that it does, and assumes that the user is going to be able to figure that out. New-Item actually does take an array of objects to be piped into it, but the parameter that accepts that is the -Value parameter, so that may be a bit less useful for most people. Your example above shows that you want to specify the -Name parameter, which does not accept piped values.

So the answer to your question is that anything that accepts piped input will allow you to pipe an array of the correct object type into it and it will process each item in turn; the challenge can be in finding if or how an object will accept piped information. For your specific example, I personally think that New-Item should have accepted piped strings to accommodate the creation of either directories or empty files, or allowed piped objects/hashtables to allow for the creation of more complex things.
|
I've been trying to learn PowerShell and am currently looking at piping the output of one command into the input of another. I've found that some commands can operate on an array of items that are piped in, while other commands can only operate on a single item in an array.
For example, Where-Object can operate on an array of items, returning a subset of them, whereas New-Item can only operate on a single item. I have to handle the two types of commands differently, with commands that can only operate on a single item having to be enclosed in a ForEach-Object block.
For instance, Where-Object doesn't need a ForEach-Object:
Get-Service | where Status -eq "Stopped"
But New-Item does:
@("Red", "Blue", "Green") | foreach {new-item -Name $_ -ItemType directory}
How can I determine whether a particular command can act on an array of items, or whether it can only act on a single item? As far as I can see the help doesn't really give a clue. For example, the help for Where-Object states the input type is System.Management.Automation.PSObject, which doesn't imply to me it can be an array of objects.
EDIT: aljodAv has pointed out in his answer that ForEach-Object is not actually needed in the second example. It can be written as:
@("Red", "Blue", "Green") | new-item -Name {$_} -ItemType directory
The question still stands, however, as $_ represents the current item from the collection object passed through the pipeline.
|
Which commands operate on a single item and which take an array of objects?
|
Promoted Builds Plugin
https://wiki.jenkins-ci.org/display/JENKINS/Promoted+Builds+Plugin
You can use the Promoted Builds Plugin, which has a manual promotion workflow. You could have:
[Project] --> [Project Deploy Test] --> [Project UA Test]
[Project UA Test] --(manual promotion)--> [Project Deploy Prod]
Explanation: business as usual until the user acceptance tests are complete. When complete, you can do the manual promotion process. The promotion process can be configured to kick off a downstream build, so in effect your pipeline resumes.
Delivery Pipeline Plugin (note: I haven't played with this plugin, so I'm just guessing)
https://wiki.jenkins-ci.org/display/JENKINS/Delivery+Pipeline+Plugin
The Delivery Pipeline Plugin lets you configure a job to have a post-build action, which is a manual trigger, and lets you resume your pipeline.
Write your own?
Conceptually, to break your pipeline and have a user "confirm" a build is good, the build needs to provide an action that can be performed after the build is complete. E.g. KeepBuildForEverAction (keep build forever), ClaimBuildAction (claim plugin).
|
I have a project & pipeline set up within my jenkins instance which looks like this.This can be described as;[Project]- Build the project[Project Deploy Test]- Deploys the project to a test server[Project UA Test]- The User Acceptance step, where the user must manually test and accept[Project Deploy Prod]- Once the user has accepted the UA Test the build is deployed to productionAll steps are working well, except the[Project UA Test]step.
This step should just be a button or something which the user can manually trigger once he or she is happy with the build.The question is,How can I configure this step to enforce some user interaction (like clicking a button) before proceeding to the next step?I have tried making the build parametrised with a Choice Parameter, but I'm not sure I'm doing the right thing.
|
Jenkins User Acceptance step in pipeline
|
Interesting, I had the same problem with Windows 8.I tried this in a command prompt:echo y | cacls.exe [options]...and it did NOT work.Then I tried:echo y| cacls.exe [options]...and it DID work. Note I had to remove the space after the 'y' and that fixed it.Hard to believe but I just did it now on Windows 8.1.
|
I want to use CACLS.EXE in a batch file with auto answer but have had no success. Following a Microsoft article did not solved my problem (http://support.microsoft.com/kb/135268).Batch file is:cacls ALF.exe /d everyoneIf I use it, it asks Y or N questions. I have tried 2 variants to auto answer the questions:echo y| cacls ALF.exe /d everyone (Doesn't work)
cacls ALF.exe /d everyone < yes.txt (Doesn't work)I use Windows Ultimate x64. How can I solve that?
|
Using CACLS.EXE in a batch file with auto answer
|
Three approaches that come readily to mind are:
1. Use operating-system pipelines:
xsltproc ss1.xsl input.xml \
  | xsltproc ss2.xsl - \
  | xsltproc ss3.xsl - \
  > output.xml
The primary downside I'm aware of here is that not all processors have command-line interfaces that make it easy to read the main input tree on stdin. So when I do this, I sometimes end up writing temporary files; fortunately, disk space is cheap. Upside: you probably already know how to do this.
2. Use XProc pipelines. Primary downside: you have to learn a new technology. Primary upside: you get to learn a new technology, which is actually quite cool.
3. Define different modes for the different operations and use XSLT 2.0 (or an XSLT 1.0 processor with some form of the node-set extension) to process the data:
<xsl:template match="/">
<xsl:variable name="tree1">
<xsl:apply-templates mode="mode1"/>
</xsl:variable>
<xsl:variable name="tree2">
<xsl:apply-templates mode="mode2" select="$tree1"/>
</xsl:variable>
<xsl:apply-templates mode="mode3" select="$tree2"/>
</xsl:template>
Upside: it's all in a single stylesheet, so you never have to puzzle out how to run the process when you come back to it six months later. (And the phrasing of your question says that this is the answer you really want.) Downside: it's all in a single stylesheet, so you have to work harder to achieve modularity and separation of concerns.
There are doubtless other approaches as well.
|
This is a follow-up on my earlier question xslt split mp3 tag into artist and title. I'll try to phrase it in generic terms because I think it will allow for a better understanding of XSLT: what it is and how to use it with the appropriate XSLT idioms.
This is what I want: input XML -> intermediate XML -> ... -> final transformation
Or in other words: how can I pipeline various XML transformations in one XSLT document?
My command-line analogy would be to have multiple command line tools that perform parts of the solution, then have them execute in succeeding order using pipes.
In this specific case: input XML (with element) -> intermediate XML (with separate and element) -> final XML sorted by ,
I'm limited to one XSLT document as the web-tool at hand does not even allow xsl:include or xsl:import to succeed.
|
xlst: from base xml to intermediate xml to final output
|
Both the scripts might be in the same folder, but .\test2.ps1 will look for test2.ps1 in the same folder as the calling application, which is the C# app. Have this in test1.ps1:
$scriptDir = Split-Path -parent $MyInvocation.MyCommand.Path
. $scriptDir\test2.ps1
|
I have this powershell script (test1.ps1) calling another powershell script(test2.ps1) to do the job.Both scriptfiles are in the same foldertest1.ps1echo "from test1.ps1"
.\test2.ps1test2.ps1echo "from test2.ps1"When I invoke test1.ps1 in C# by creating runspace, adding commands to pipeline and invoking it, I get an error message saying"The term '.\test2.ps1' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again."
|
Powershell scripts invoked by C# application
|
You need to wrap theggplotcall in aprintso you get both calls. You can use the following code:library(tidyverse)
#> Warning: package 'ggplot2' was built under R version 4.1.2
#> Warning: package 'tibble' was built under R version 4.1.2
#> Warning: package 'tidyr' was built under R version 4.1.2
#> Warning: package 'readr' was built under R version 4.1.2
#> Warning: package 'dplyr' was built under R version 4.1.2
library(magrittr)
#> Warning: package 'magrittr' was built under R version 4.1.2
#>
#> Attaching package: 'magrittr'
#> The following object is masked from 'package:purrr':
#>
#> set_names
#> The following object is masked from 'package:tidyr':
#>
#> extract
mtcars %T>%
{print(ggplot(., aes(x = cyl, y = mpg))+
geom_line())} %>%
group_by(cyl) %>%
  summarise(mpg = mean(mpg))
#> # A tibble: 3 × 2
#> cyl mpg
#> <dbl> <dbl>
#> 1 4 26.7
#> 2 6 19.7
#> 3     8  15.1
Created on 2022-06-30 by the reprex package (v2.0.1)
|
Let's take a look at the following two pieces of code:mtcars %>%
ggplot(aes(x = cyl, y = mpg))+
geom_line()This works and creates the following plot:Now let's look at this:mtcars %>%
group_by(cyl) %>%
summarise(mpg = mean(mpg))This also works and creates the following output:# A tibble: 3 x 2
cyl mpg
<dbl> <dbl>
1 4 26.7
2 6 19.7
3 8 15.1However, this doesn't work:mtcars %T>%
ggplot(aes(x = cyl, y = mpg))+
geom_line() %>%
group_by(cyl) %>%
summarise(mpg = mean(mpg))It gives the following error:Error in UseMethod("group_by") :
no applicable method for 'group_by' applied to an object of class "c('LayerInstance', 'Layer', 'ggproto', 'gg')"Why doesn't it work? From the%T>%documentation, I would expect that the left-hand side object, in this case,mtcars, would be returned afterggplot. Unfortunately that doesn't seem to work. Did I misunderstand the%T>%pipe? How is the code supposed to look like to make this work?
|
How to use the %T>% pipe after ggplot?
|
If I understand you correctly, you want to add a new column based on a given column, e.g. X2. You need to pass this column as an additional argument to the function using kw_args:
import pandas as pd
from sklearn.preprocessing import FunctionTransformer
from sklearn.pipeline import Pipeline
df = pd.DataFrame(columns=['X1', 'X2', 'X3'], data=[
[1,16,9],
[4,36,16],
[1,16,9],
[2,9,8],
[3,36,15],
[2,49,16],
[4,25,14],
[5,36,17]])
def feat_comp(x, column):
x[f'100-{column}'] = 100 - x[column]
return x
pipe_text = Pipeline([('col_test', FunctionTransformer(feat_comp, validate=False, kw_args={'column': 'X2'}))])
pipe_text.fit_transform(df)
Result:
   X1  X2  X3  100-X2
0 1 16 9 84
1 4 36 16 64
2 1 16 9 84
3 2 9 8 91
4 3 36 15 64
5 2 49 16 51
6 4 25 14 75
7 5 36 17 64
(In your example FunctionTransformer(feat_comp, 'X2', validate=False), 'X2' would be the inverse_func, and the string 'X2' is not callable, hence the error.)
|
I have a sample data:df = pd.DataFrame(columns=['X1', 'X2', 'X3'], data=[
[1,16,9],
[4,36,16],
[1,16,9],
[2,9,8],
[3,36,15],
[2,49,16],
[4,25,14],
[5,36,17]])I want to create two complementary columns in my df based on x2 ad X3 and include it in the pipeline.I am trying to follow the code:def feat_comp(x):
x1 = 100-x
return x1
pipe_text = Pipeline([('col_test', FunctionTransformer(feat_comp, 'X2',validate=False))])
X = pipe_text.fit_transform(df)
It gives me an error:
TypeError: 'str' object is not callable
How can I apply the function transformer on selected columns and how can I use them in the pipeline?
|
FunctionTransformer & creating new columns in pipeline
|
You can do it with the help of the $function aggregation operator:
{
anId: {
$function: {
body: function() {
return new ObjectId();
},
args: [],
lang: 'js'
}
}
}
|
As the last stage of my pipeline, I have a $project like this:
{
...
anId : new ObjectId()
}
But mongo is generating the same Id for each document. I want it to generate a new, different Id for each projected document. How to do so within the pipeline?
|
MongoDB generates same ObjectId with new ObjectId in pipeline's $project stage
|
Multi-project pipelines are only supported in paid versions:
Introduced in GitLab Premium 9.3. Available in GitLab Premium, GitLab.com Silver, and higher tiers.
If you use the free GitLab Enterprise Edition, then trigger is not supported in it. You can check your GitLab version by going to the help page: <Gitlab url>/help
|
The example for the multi project pipeline with mirroring status (https://docs.gitlab.com/ee/ci/multi_project_pipelines.html#mirroring-status-from-triggered-pipeline) doesn't work:trigger_job:
trigger:
project: my/project
strategy: dependLeads to an error:
"This GitLab CI configuration is invalid: jobs:trigger_pipeline_in_another_repo config contains unknown keys: trigger"config:trigger_pipeline_in_another_repo:
stage: trigger_pipeline_in_b
script:
- apt-get update && apt-get upgrade -y
- apt-get install curl -y
- curl --request POST --form "token=$CI_JOB_TOKEN" --form ref=master http://35.184.231.241/api/v4/projects/8/trigger/pipeline
trigger:
project: root/isolated_pipeline
strategy: depend
|
Multi-project pipeline with trigger , example from the gitlab docs doesn't work
|
from copy import deepcopy
estimator_deep_copy = deepcopy(pipeline)
Note that the purpose of clone is to get an unfitted/clean estimator.
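A small illustrative sketch (made-up data, not from your code) showing that the deep copy keeps its fitted state even after the original pipeline is refitted:
from copy import deepcopy
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

X1, y1 = np.random.rand(20, 3), np.random.rand(20)
X2, y2 = np.random.rand(20, 3), np.random.rand(20)

pipe = Pipeline([('scaler', StandardScaler()), ('lr', LinearRegression())])
pipe.fit(X1, y1)

snapshot = deepcopy(pipe)   # full copy of structure *and* fitted attributes
pipe.fit(X2, y2)            # refitting the original...

# ...does not change the snapshot: it still holds the parameters learned on X1
print(snapshot.named_steps['lr'].coef_)
print(pipe.named_steps['lr'].coef_)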
|
Suppose I have defined a sklearn Pipeline structure. I need to deep-copy its structure and data into another variable so that when refitting the original one, the new variable does not change. I tried to useclonefromsklearn.basein a similar way to the following code:temp_pipe = Pipeline([
('Scaler', StandardScaler()),
('LinearRegression', LinearRegression())]);
for i in iterations:
temp_pipe.fit(X,y);
....
if check_condition:
        final = clone(temp_pipe);
but it seems to do a deep copy of the structure, not of the data, as stated here:
Clone does a deep copy of the model in an estimator without actually copying attached data
I know I can do something like:
final = Pipeline([
('Scaler', StandardScaler()),
('LinearRegression', LinearRegression())]);
for i in iterations:
temp_pipe = clone(final)
temp_pipe.fit(X,y);
....
if check_condition:
final = temp_pipe;but is there a way to deep-copy also the fitted data?
|
How to deep copy structure and data of a sklearn Pipeline structure into a new variable?
|
There is the option:heroku ci:config:set FROM_EMAIL=$(FROM_EMAIL) --pipeline=my-pipeline
|
If I want to update environment config variables for a heroku app, I would run the following command:heroku config:set FROM_EMAIL=$(FROM_EMAIL) --app=my-appSay I have a pipeline which I would like to add the same environment variable. What would be the equivalent command to update my Heroku CI config variables.I have had a look at all the available options to me by running:heroku help pipelinesNothing seems to facilitate this. I have looked at adding these environment variables to individual pipeline apps, but it does not change the pipeline settings.I need to configure lots of different config variables and need to be able to do this via the command line, doing it manually would be too problematic.
|
Updating heroku pipeline environment variables
|
sklearn's Pipeline has some nice features. It performs several tasks in a very clean way. We define our features, their transformation and the list of classifiers we want to use, all in one place. In the first step of this:
pipeline = Pipeline([
('features',feats),
('classifier', RandomForestClassifier(random_state = 42)),
])you have defined the features's name and its transformation function(that is incorporated infeat), in second step, you have defined the classifier's name and classifier classifier.Now while callingpipeline.fit, it first fit features and transform it, then fit the classifier on the transformed features. So, it does some steps for us. More you cancheck-here
|
In this page https://www.kaggle.com/baghern/a-deep-dive-into-sklearn-pipelines it calls fit_transform for transforming the data as follows:
from sklearn.pipeline import FeatureUnion
feats = FeatureUnion([('text', text),
('length', length),
('words', words),
('words_not_stopword', words_not_stopword),
('avg_word_length', avg_word_length),
('commas', commas)])
feature_processing = Pipeline([('feats', feats)])
feature_processing.fit_transform(X_train)
While during training with feature processing, it only uses fit then predict:
from sklearn.ensemble import RandomForestClassifier
pipeline = Pipeline([
('features',feats),
('classifier', RandomForestClassifier(random_state = 42)),
])
pipeline.fit(X_train, y_train)
preds = pipeline.predict(X_test)
np.mean(preds == y_test)
The question is: is the fit doing the transformation on X_train (as what is achieved by transform, since we are not calling fit_transform here) in the second case?
|
fit vs fit_transform in pipeline
|
If this is a one-off move, then export the Resource Manager (ARM) template and import it into the other data factory, remembering to change the parameters as appropriate (like the name).
If you have a self-hosted Integration Runtime, you'll need to fix the IR reference once it is imported, because it will replicate the IR, but that IR should be linked to the original or register its own IR.
If you combine Wang's suggestion and have a self-hosted IR, then I'd monitor my post here for some issues I am having with that.
M.
|
What is the easiest way of moving a pipeline across from an Azure Data Factory V2 to another?Both ADF V2 are in the same resource group.
|
How do I move a pipeline from an Azure Data Factory V2 to another (same Resource Group)?
|
You either need to use add-ons from the Atlassian Marketplace or use the Confluence REST API to develop your own script. Add-ons available in the Marketplace will work out of the box, but of course you don't have the flexibility to change them. Your own script has the advantage of flexibility.
Depending on what you want to achieve there might be a different approach, but based on what you mentioned, if you want to update an existing page, you can call the Confluence REST API to update the existing page and make the changes you want. For example, the following updates an existing page in Confluence:
curl -u admin:admin -X PUT -H 'Content-Type: application/json' -d'{"id":"3604482","type":"page",
"title":"new page","space":{"key":"TST"},"body":{"storage":{"value":
"<p>This is the updated text for the new page</p>","representation":"storage"}},
"version":{"number":2}}' http://localhost:8080/confluence/rest/api/content/3604482 | python -mjsoTake a look at Atlassian Marketplace for the add-ons and alsoConfluence REST API Examplesfor further details.
|
I was wondering if there is a way to connect GitLab with Confluence. I want to update Confluence-Pages with a pipeline every time something is being pushed into a Gitlab-Project.
|
Can I automatically update Confluence-Pages via actions on Gitlab?
|
No - to quote the proposal:
The pipeline operator is essentially a useful syntactic sugar on a function call with a single argument. In other words, sqrt(64) is equivalent to 64 |> sqrt.
So your example would effectively just end up desugaring to avg(tail), which isn't what you want.
That said, there are also two separate proposals to add a composition operator to the language:
https://github.com/TheNavigateur/proposal-pipeline-operator-for-function-composition
https://github.com/isiahmeadows/function-composition-proposal
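Until one of those proposals lands, you can get the same effect today with a plain helper function (a sketch, not part of either proposal):
// Right-to-left composition: compose(f, g)(x) === f(g(x))
const compose = (...fns) => x => fns.reduceRight((acc, fn) => fn(acc), x);

const tailAndAverage = compose(avg, tail);
tailAndAverage([2, 4, 6, 8]); // avg(tail([2, 4, 6, 8])) === 6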
|
Will the pipeline operator enable the composition of functions?const sum = (nos)=> nos.reduce((p,c)=> p + (+c), 0);
const avg = (nos)=> sum(nos) / nos.length;
const tail = ([_, ...tail])=> tail;
const tailAndAverage = tail |> avg; // valid?IstailAndAveragea function in the above code?
|
The pipeline operator in JavaScript
|
Instead of doing a range over the channel, your first select case should be from that channel, with the whole thing inside an infinite loop.
func BatchEvents(inChan <-chan *Event) <-chan *Event {
    batchSize := 10
    comboEvent := Event{}
    out := make(chan *Event)
    go func() {
        defer close(out)
        i := 0
for {
select {
case event, ok := <-inChan:
if !ok {
return
}
comboEvent.data = append(comboEvent.data, event.data)
i++
                if i == batchSize {
                    // send a copy so resetting comboEvent below doesn't clobber the batch already sent
                    batch := comboEvent
                    out <- &batch
                    // reset for next batch
                    comboEvent = Event{}
                    i = 0
}
case <-time.After(5 * time.Second):
// process whatever we have seen so far if the batch size isn't filled in 5 secs
if i > 0 {
out <- &comboEvent
}
// stop after
return
}
}
}()
return out
}
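A minimal way to drive it (a sketch: the Event type is not shown in your snippet, so the shape below is an assumption chosen so that the append call above compiles, and BatchEvents is assumed to live in the same package):
package main

import "fmt"

// Assumed shape of Event; the real definition isn't shown in the question.
type Event struct {
    data []interface{}
}

func main() {
    in := make(chan *Event)
    out := BatchEvents(in)

    go func() {
        defer close(in)
        for i := 0; i < 25; i++ {
            in <- &Event{data: []interface{}{i}}
        }
    }()

    // Prints two full batches of 10; the trailing 5 events are dropped here
    // because the input closes before the 5-second timeout fires.
    for batch := range out {
        fmt.Println("received batch with", len(batch.data), "items")
    }
}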
|
I'm reading the pipelines tutorial online and trying to construct a stage that operates like this --
Batches up incoming events in batches of 10 each before sending them to the out chan
If we haven't seen 10 events in 5 seconds, combine as many as we received and send them, closing the out chan and returning.
However, I have no idea what the first select case would look like. Tried multiple things but couldn't get past this.
Any pointers much appreciated!
func BatchEvents(inChan <- chan *Event) <- chan *Event {
batchSize := 10
comboEvent := Event{}
go func() {
defer close(out)
i = 0
for event := range inChan {
select {
case -WHAT GOES HERE?-:
if i < batchSize {
comboEvent.data = append(comboEvent.data, event.data)
i++;
} else {
out <- &comboEvent
// reset for next batch
comboEvent = Event{}
i=0;
}
case <-time.After(5 * time.Second):
// process whatever we have seen so far if the batch size isn't filled in 5 secs
out <- &comboEvent
// stop after
return
}
}
}()
return out
}
|
How to batch items in golang pipeline stages using channels?
|
Even though Airflow is very useful and highly customizable due to being written fully in Python, this same thing was an issue I have been struggling with. Unfortunately, the Airflow Web UI is only capable of using UTC time. It is indicated in the official documentation too. In https://airflow.apache.org/timezone.html you'll find:
Please note that the Web UI currently only runs in UTC.
Probably a custom implementation is required to do that. I will post here when this feature is added or if I find a solution.
I'm starting out with apache airflow and have a DAG up and running.Now I'm using the web ui to monitor the DAG runs and I cannot understand the time shown on the tree view page. This image illustrates my situation:I don't understand the time axis shown on marker (3) on the image. The ui's time seems to be in UTC, as shown on marker (1). The selected interval is set to2017-07-11 12:06:25as shown on marker (2). The DAG runs depicted on the image have executed at around 11h 12h, so I cannot understand why it says 07PM on the temporal axis in the UI (marker 3). Is this expected behavior? Is the UI picking up another time zone for the diagram?Apart from this everything seems to be working OK.
|
airflow - how to set the time on the web ui's tree view to UTC?
|
@ChristianDean in his comment answered your first question quite nicely, so I'll answer your second. I do believe it is possible - you can use theFile's closed attribute and raise a StopIteration exception if the file was closed. Like this:
def tail(theFile):
theFile.seek(0, 2)
while True:
if theFile.closed:
raise StopIteration
line = theFile.readline()
...
        yield line
Your loop shall cease when the file is closed and the exception is raised.
A more succinct method (thanks, Christian Dean) that does not involve an explicit exception is to test the file pointer in the loop header.
def tail(theFile):
theFile.seek(0, 2)
while not theFile.closed:
line = theFile.readline()
...
yield line
|
Problem:Program to read the lines from infinite stream starting from its end of file.#Solution:import time
def tail(theFile):
theFile.seek(0,2) # Go to the end of the file
while True:
line = theFile.readline()
if not line:
time.sleep(10) # Sleep briefly for 10sec
continue
yield line
if __name__ == '__main__':
fd = open('./file', 'r+')
for line in tail(fd):
print(line)readline()is a non-blocking read, with if check for every line.Question:It does not make sense for my program running to wait infinitely, after the process writing to file hasclose()1) What would be the EAFP approach for this code, to avoidif?2) Can generator function return back onfileclose?
|
Reading infinite stream - tail
|
I'm not entirely sure if the F# library functions for Result implement all you need here - the bind operation lets you sequentially compose multiple operations and it stops at the first error. In your case, you want to run one of two possible functions and then you want to do two operations and collect the errors they generate. To do this, you'd probably need to define more Result functions. Something like this does the trick (I changed the code to collect lists of errors):
module Result =
let either f1 f2 =
match f1 () with
| Ok res -> Ok res
| Error e1 ->
match f2 () with
            | Ok res -> Ok res
            | Error e2 -> Error (e1 @ e2)
let both res1 res2 =
match res1, res2 with
| Ok r1, Ok r2 -> Ok (r1, r2)
| Error e1, Error e2 -> Error (e1 @ e2)
        | Error e, _ | _, Error e -> Error e
Now you can express your logic as follows:
let init data directoryPath =
Result.either
(fun () -> createDirectory directoryPath)
(fun () -> createDirectory2 directoryPath)
|> Result.bind (fun directory ->
        Result.both (createFile directory directoryPath) (getPermissionType directory)
        |> Result.map (fun (filePath, permissionType) ->
            saveData data directory filePath permissionType))
|
Result typeis a new feature in F# 4.1:type Result<'T,'TError> =
| Ok of 'T
| Error of 'TError
bind : ('T -> Result<'U, 'TError>) -> Result<'T, 'TError> -> Result<'U, 'TError>How can I useResult.bindfunction to chain continuous functions for the example below?Assuming I want to save some data to a file. If it succeeds, it should return the saved data, or error string if it fails:Firstly, I try to create a directory usingeithercreateDirectoryorcreateDirectory2. Then, I attempt to create a file usingbothcreateFileandgetPermissionTypefunctions. Finally, I save the data to the file.let init data directoryPath =
(match createDirectory directoryPath with
| Ok directory -> Ok directory
| Error err1 ->
match createDirectory2 directoryPath with
| Ok directory -> Ok directory
| Error err2 -> Error (err1 + "; " + err2))
|> function
| Ok directory ->
match (createFile directory directoryPath), (getPermissionType directory) with
| Ok filePath, Ok permissionType ->
Ok (saveData data directory filePath permissionType)
| Error err1, Ok _ -> Error err1
| Ok _, Error err2 -> Error err2
| Error err1, Error err2 -> Error (err1 + "; " + err2)
| Error err -> err
|
Use bind to chain continuous functions
|
Please read the documentation carefully. Every classifier is parametrized by the features column (featuresCol). It doesn't consider any other columns or their order.
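So it is enough to point the estimator at the assembled features column; the remaining columns are simply carried along and ignored. A minimal PySpark sketch (column names taken from the question):
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml import Pipeline

assembler = VectorAssembler(
    inputCols=["hour", "mobile", "userFeatures"], outputCol="features")

# Only featuresCol and labelCol are read by the estimator;
# "id", "hour", "mobile", ... stay in the DataFrame untouched.
lr = LogisticRegression(featuresCol="features", labelCol="clicked")

pipeline = Pipeline(stages=[assembler, lr])
# model = pipeline.fit(dataset)   # dataset is assumed to be the DataFrame from the question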
|
A sparkVectorAssemblerhttp://spark.apache.org/docs/latest/ml-features.html#vectorassemblerproduces the following outputid | hour | mobile | userFeatures | clicked | features
----|------|--------|------------------|---------|-----------------------------
0 | 18 | 1.0 | [0.0, 10.0, 0.5] | 1.0 | [18.0, 1.0, 0.0, 10.0, 0.5]as you can see the last column contains all the previous features. Is it better / more performant if the other columns are removed e.g. only the label/id and features are retained or is this an unnecessary overhead and just feeding label/id and features into the estimator is enough?What happens when theVectorAssembleris used in a pipeline? will only the last features be used or will it introduce colinearity (duplicate columns) if the original columns are not removed manually?
|
spark pipeline vector assembler drop other columns
|
Shifting left by n bits is the same thing as multiplying the number by 2^n. Shifting left 2 bits multiplies by 4.
If your branch offset is shifted left by 2, that means your branch offset operand is in whole instruction units, not bytes. So a branch instruction with an 8 operand means: jump 8 instructions, which is 32 bytes.
MIPS multiplies by 4 because instructions are always 32 bits. 32 bits is 4 bytes.
MIPS instructions are guaranteed to begin at an address which is evenly divisible by 4. This means that the low two bits of the PC are guaranteed to always be zero, so all branch offsets are guaranteed to have 00 in the low two bits. For this reason, there's no point storing the low two bits in a branch instruction. The MIPS designers were trying to maximize the range that branch instructions can reach.
PC means "Program Counter". The program counter is the address of the current instruction. PC+4 refers to the address 4 bytes past the current instruction.
Branch Offset
MIPS branch offsets, like most processors', are relative to the address of the instruction after the branch. A branch with an immediate operand of zero is a no-op; it branches to the instruction after the branch. A branch with a sign-extended immediate operand of -1 branches back to the branch.
The branch target is at ((branch instruction address) + 4 + ((sign extended branch immediate operand) << 2)).
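A tiny worked example of that formula (a sketch with arbitrary addresses, written as a small Python calculation):
def branch_target(branch_addr, imm16):
    # Sign-extend the 16-bit immediate, then shift left 2 (whole instructions -> bytes)
    offset = imm16 - 0x10000 if imm16 & 0x8000 else imm16
    return branch_addr + 4 + (offset << 2)

print(hex(branch_target(0x1000, 3)))       # 0x1010 : 3 instructions past the instruction after the branch
print(hex(branch_target(0x1000, 0xFFFF)))  # 0x1000 : offset -1 branches back to the branch itself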
|
I am working on datapaths and have been trying to understand branch instructions.So this is what I understand. In MIPS, every instruction is 32 bits. This is 4 bytes. So the next instruction would be four bytes away.In terms of example, I say PC address is 128. My first issue is understanding what this 128 means. My current belief is that it is an index in the memory, so 128 refers to 128 bytes across in the memory. Therefore, in the datapath it always says to add 4 to the PC. Add 4 bits to the 128 bits makes 132, but this is actually 132 bytes across now (next instruction). This is the way I make sense of this.In branch equals, say the offset is a binary number of 001. I know I must sign extend, so I would add the zeroes (I will omit for ease of reading). Then you shift left two, resulting in 100. What is the purpose of the shift? Does the offset actually represent bytes, and shifting left will represent bits? If so, adding this to the PC makes no sense to me. Because if PC refers to byte indeces, then adding the offset shifted left two would be adding the offset in number of bytes to PC that is in number of bytes. If PC 128 actually refers to 128 bits, so 32 bytes, then why do we only add 4 to it to get to the next instruction? When it says PC+4, does this actually mean adding 4 bytes?My basic issue is with how PC relative addressing works, what the PC+4 means, and why offset is shifted by 2.
|
Assembly PC Relative Addressing Mode
|
This is not a bug. The main reason that you add the scaler to the pipeline is to prevent leaking information from your test set to your model. When you fit the pipeline to your training data, the MinMaxScaler keeps the min and max of your training data. It will use these values to scale any other data that it may see for prediction. As you also highlighted, this min and max are not necessarily the min and max of your test data set! Therefore you may end up having some negative values in your scaled test set when the min of your test set is smaller than the min value in the training set. You need a scaler that does not give you negative values. For instance, you may use sklearn.preprocessing.StandardScaler. Make sure that you set the parameter with_mean = False. This way, it will not center the data before scaling but will scale your data to unit variance.
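A minimal sketch of that change, keeping the rest of the pipeline from the question as it is:
from sklearn.preprocessing import StandardScaler
from sklearn import decomposition, linear_model
from sklearn.pipeline import Pipeline

pipe = Pipeline(steps=[
    # with_mean=False scales to unit variance without centering, so it does not
    # introduce negative values (assuming the input itself is non-negative)
    ('scaler_2', StandardScaler(with_mean=False)),
    ('pca', decomposition.NMF(6)),
    ('logistic', linear_model.LogisticRegression()),
])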
|
This is a very small sklearn snipplet:logistic = linear_model.LogisticRegression()
pipe = Pipeline(steps=[
('scaler_2', MinMaxScaler()),
('pca', decomposition.NMF(6)),
('logistic', logistic),
])
from sklearn.cross_validation import train_test_split
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=0.2)
pipe.fit(Xtrain, ytrain)
ypred = pipe.predict(Xtest)I will get this error:raise ValueError("Negative values in data passed to %s" % whom)
ValueError: Negative values in data passed to NMF (input X)According to this question:Scaling test data to 0 and 1 using MinMaxScalerI know this is becauseThis is due to the fact that the lowest value in my test data was
lower than the train data, of which the min max scaler was fitBut I am wondering, is this a bug?
MinMaxScaler (all scalers) seems should be applied before I do the prediction, it should not depends on previous fitted training data, am I right?Or how could I correctly use preprocessing scalers with Pipeline?Thanks.
|
How can I correctly use Pipleline with MinMaxScaler + NMF to predict data?
|
while read filename; do
lzop -fdc "$filename" | python lineprocessor.py
done < filenames.txt >> output.txt
|
I have a long list of filenames in filenames.txt file. These files are lzo compressed and I use lzop to decompress them for further processing in a pipeline.cat filenames.txt | (xargs lzop -dc || true) | python lineprocessor.py > output.txtSo filenames are input to the lzop -dc line by line. Then they are decompressed and piped into the lineprocessor.py script that I have written. Finally the output of lineprocessor.py is written in output.txt.The problem is that some files in filenames.txt are not properly compressed and lzop crashes and so does the whole pipeline. I added the || true to prevent this but it didn't help. lzop doesn't have the option to ignore the error. I don't care about the incorrectly compressed files.Is there any way I can work around this problem easily?
I want the pipeline to continue no matter what happens to lzop -dc command.
|
Ignore errors in linux pipelines
|
Here is an example:use Redis;
my $redis = Redis->new(server => '127.0.0.1:6379', reconnect => 60);
my %hval = ( 'foo', 1, 'bar', 2, 'foobar', 3 );
foreach my $key (keys %hval) {
my $ok = $redis->zadd("myzset", $hval{ $key }, $key, sub {
my ($reply, $error) = @_;
print "Returned $reply with error = [$error]\n" ;
});
}
print "Waiting replies ...\n";
$redis->wait_all_responses;Please note that:a wait_all_responses clause is required to put a synchronization point with the server.zadd requires 3 parameters (zset name, score, key) in that order
|
My attempt to Redis pipeline in perl, using Redis.pm, Is this correct approach? Snipped of code below:...
my $redis = Redis->new(server => '127.0.0.1:6379', reconnect => 60);
foreach my $key (keys %hval) {
my $ok = $redis->zadd($key, $hval{ $key }, &process);
}
sub process {
my ($reply, $error) = @_;
my $cr = sub {
my ($r, $e) = @_;
if ($e) {
warn Dumper('Redis pipelining crapped out', $e);
{
}
}Have you tried this before? I looked around but could not found any suitable example, Please let me know. I am using all required module and this code is for here only. Actual code is much complex? Thanks in advance.
|
Using Redis.pm pipeline in perl
|
It looks like you just want to add an element at the front of the array at each timestep, thus moving the already existing array elements one to the right. You could avoid doing O(n**2) ops like this:
int& p_at_time(int index, int time_moment) {
    return p[time_moment - index + 1];
}and at t=1: p_at_time(1,1) = I[1];at t=2: p_at_time(1,2) = I[2], (p_at_time(2,2) is already== I[1])at t=3: p_at_time(1,3) = I[3], (p_at_time(2,3) and p_at_time(3,3) have the values I[2]
and I[1] respectively)
|
I am trying to simulate 5 stage of pipeline. I have saved all the instruction into a struct.
( basically done with the stage of lixcal analysis )eg:ADD R1 R2 R3 // R1 = R2+ R3 ... struct pipe{ int pc, string instruction , int r1, int r2....}now ifp[i]is one of the stage of pipeline and (p[1]could bepc=pc+1;I[i]is instructions, (I[1]could beADD R1 R2 R3)what I want to do isat t=1 : p[1] = I[1]
at t=2 :p[2] = I[1], p[1] = I[2]
at t=3 :p[3] = I[1], p[2] = I[2], p[1] = I[3]
at t=4 :p[4] = I[1], p[3] = I[2], p[2] = I[3], p[1] = I[4]... and goes like that
I am using c++ so far. how could any one represent this cycle in c++ ?
|
how to simulate 5 stage of pipe line in c++?
|
You can't really convert "gst-launch syntax" to "python syntax".
Either you create the same pipeline 'manually' (programmatically) using gst.element_factory_make() and friends, and link everything yourself.
Or you just use something like:
pipeline = gst.parse_launch ("v4l2src ! ..... ")
You can give elements in your pipeline string names with e.g. v4l2src name=mysrc ! ... and then retrieve the element from the pipeline with
src = pipeline.get_by_name ('mysrc')
and then set properties on it like:
src.set_property("location", filepath)
|
How do I implement the followinggst-launchcommand into a Python program using the PyGST module?gst-launch-0.10 v4l2src ! \
'video/x-raw-yuv,width=640,height=480,framerate=30/1' ! \
tee name=t_vid ! \
queue ! \
videoflip method=horizontal-flip ! \
xvimagesink sync=false \
t_vid. ! \
queue ! \
videorate ! \
'video/x-raw-yuv,framerate=30/1' \
! queue ! \
mux. \
alsasrc ! \
audio/x-raw-int,rate=48000,channels=2,depth=16 ! \
queue ! \
audioconvert ! \
queue ! \
mux. avimux name=mux ! \
filesink location=me_dancing_funny.avi
|
Convert gst-launch command to Python program
|
It appears that the UrlRouting takes place at step number 9 - PostResolveRequestCache. So it does in fact take place after AuthenticateRequest, which is step number 4. This is the document for UrlRoutingModule. I looked up its Init() method in Reflector and that is where it subscribes to the PostResolveRequestCache event. So I guess now I have to try and write some code that elegantly extracts the token from the URL manually.
|
When does routing take place in the ASP.NET MVC pipeline?ASP.NET Application Life Cycle Overview for IIS 7.0Is it in step number 2 (Perform Url Mapping)?I intend to have a few routes that have an id"activate/{id}""forgotpassword/{id}"I would like to be able to access the id early on in the pipline in step 4 - AuthenticateRequest. So that I can pass an authentication token through the id part of the url to my custom authentication module.So can I access the id property in my custom authentication module or do I have to manually extract it from the request url?Thanks for your help,Duncan
|
When does routing take place in the pipeline?
|
Have you considered using Parallel.ForEach from the TPL?
The Task Parallel Library (TPL) is a set of public types and APIs in .NET 4.
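For instance, a minimal illustration of Parallel.ForEach (a generic sketch, not a drop-in implementation of your IOperation pipeline):
using System;
using System.Threading.Tasks;

class Demo
{
    static void Main()
    {
        var items = new[] { 1, 2, 3, 4, 5 };

        // Runs the body for the items concurrently on thread-pool threads
        // and blocks until all iterations have completed.
        Parallel.ForEach(items, item =>
        {
            Console.WriteLine($"processing {item} on thread {Environment.CurrentManagedThreadId}");
        });
    }
}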
|
I want to create a parallel pipeline in C#. I have declared an interface named IOperation:
public interface IOperation<Tin, Tout>
{
BlockingCollection<Tout> BlockingCollection(IEnumerable<Tin> input);
}
Now I want to write a class which executes multiple of these operations in parallel. I began with this:
public class Pipeline : IPipeline
{
private List<IOperation<Object, Object>> operations = new List<IOperation<Object, Object>>();
private List<BlockingCollection<Object>> buffers = new List<BlockingCollection<Object>>();
public void Register(IOperation<Object, Object> operation)
{
operations.Add(operation);
}
public void Execute()
{
}
}But i don't find any solution to save the operations and the buffers between the operations, because they all have different generic types. Does anyone have an idea?
|
Parallel pipeline in C#
|
It can't be done withInvoke-Expression, however you can do it by converting the expression into ascript blockfirst then you can pass arguments as if it was a function.Do note:This still applies for the[scriptblock]::Create(...)approach.& ([scriptblock]::Create((irm get.scoop.sh))) -ScoopDir C:\foo\bar
|
I'm basing a small project off of the below one-liner, but I'm struggling to pass parameters into it on the same line.irm get.scoop.sh | iexThis script can take additional parameters, such as-ScoopDir.How does one pass-ScoopDir C:\foo\barinto the one-liner?
|
Is it possible to pass a variable into a piped powershell script?
|
CI variables should be available in gitlab-runner(machine or container) as environment variables, they are either predefined and populated by Gitlab like the list of predefined variableshere, or added by you in the settings of the repository or the gitlab groupSettings > CI/CD > Add Variable.After adding variables you can use the following syntax, you can test if the variable has the correct value by echoing it.variables:
GITGUARDIAN_API_KEY: "$GITGUARDIAN_API_KEY"
script:
- echo "$GITGUARDIAN_API_KEY"
- ggshield scan ci
|
I'm quite new to CI/CD and basically I'm trying to add this job to Gitlab CI/CD that will run through the repo looking for secret leaks. It requires some API key to be passed there. I was able to directly insert this key into .gitlab-ci.yml and it worked as it was supposed to - failing the job and showing that this happened due to this key being in that file.But I would like to have this API key to be stored in .env file that won't be pushed to a remote repo and to pull it somehow into .gitlab-ci.yml file from there.Here's minestages:
- scanning
gitguardian scan:
variables:
GITGUARDIAN_API_KEY: ${process.env.GITGUARDIAN_API_KEY}
image: gitguardian/ggshield:latest
stage: scanning
script: ggshield scan ciThe pipeline fails with this message:Error: Invalid API key.so I assume that the way I'm passing it into variables is wrong.
|
Can I pass a variable from .env file into .gitlab-ci.yml
|
It is described in the official docs: https://playwright.dev/docs/ci#gitlab-ci
You can set the image per job, like in my case:
test e2e:
  stage: test
  image: mcr.microsoft.com/playwright:v1.16.3-focal
script:
- npx playwright install
- npm run test-e2e
|
I know my question is similar to thisone, but hope someone can help me to execute Playwright tests in the Gitlab pipeline.My .gitlab-ci.yaml insludes next lines:image: node:16.13.0
...
test e2e:
stage: test
script:
- npx playwright install
- npm run test-e2eCan I somehow set a proper docker image or OS thatis supported..?
|
Playwright failed in the Gitlab pipeline with browserType.launch: Host system is missing dependencies
|
From your YAML sample, it seems that there is a format issue. When you use an if expression, there is no need to add a condition field in the YAML.
You can try the following sample and check if it works:
stages:
- stage: A
jobs:
- job: test
steps:
- xxx
- stage: B
jobs:
- job: test
steps:
- xxx
- stage: C
${{ if eq(variables['Build.SourceBranchName'], 'main') }}:
dependsOn: A
${{ if eq(variables['Build.SourceBranchName'], 'dev') }}:
dependsOn: B
jobs:
- job: test
steps:
- xxx
|
I have three environments: dev, hml and qa.In my pipeline depending on the branch the stage has a condition to check whether it will run or not:- stage: Project_Deploy_DEV
condition: eq(variables['Build.SourceBranch'], 'refs/heads/dev')
dependsOn: Project_Build
- stage: Project_Deploy_HML
condition: eq(variables['Build.SourceBranch'], 'refs/heads/hml')
dependsOn: Project_BuildI'm doing the qa stage, and I'd like to put a condition, depending on the branch, the dependson parameter will change:- stage: Project_QA
condition:
${{ if eq(variables['Build.SourceBranchName'], 'dev') }}:
dependsOn: 'Project_Deploy_DEV'
${{ if eq(variables['Build.SourceBranchName'], 'hml') }}:
dependsOn: 'Project_Deploy_HML'However, the condition above is not working, would anyone know the best way to perform this condition?Thanks
|
Azure pipeline - Stage condition dependson
|
You can try to usestash:stage('Clone repository') {
agent {
label 'builder'
}
steps {
sh 'git clone ssh://[email protected]./repo.git'
script {
stash includes: 'repo/', name: 'myrepo'
}
}
}
stage('Build application') {
agent {
docker {
label 'builder'
image 'gcr.io/kaniko-project/executor:debug'
args '-u 0 --entrypoint=""'
}
}
steps {
script {
unstash 'myrepo'
}
sh '''#!/busybox/sh
/kaniko/executor -c `pwd` -f Dockerfile"
'''
}
|
Am I able somehow to copy data from one stage for usage on another?For example, I have one stage where I want to clone my repo, and on another run the Kaniko which will copy (on dockerfile) all data to container and build itHow to do this? BecauseStages are independent and I not able to operate via the same data on bothon Kaniko I not able to install the GIT to clone it there
Thanks in advanceExample of code :pipeline {
agent none
stages {
stage('Clone repository') {
agent {
label 'builder'
}
steps {
sh 'git clone ssh://[email protected]./repo.git'
sh 'cd repo'
}
}
stage('Build application') {
agent {
docker {
label 'builder'
image 'gcr.io/kaniko-project/executor:debug'
args '-u 0 --entrypoint=""'
}
}
steps {
sh '''#!/busybox/sh
/kaniko/executor -c `pwd` -f Dockerfile"
'''
}
}
}
}P.S. On dockerfile I using such asADD . /
|
Transfer data between stages on Jenkins Pipeline
|
Azure DevOps does not support attachments (other than the test result file) if you use the JUnit or xUnit report format. SeeCapture Screenshotsin the Azure docs.To upload screenshots as attachments with the Publish Test Results task, you need to generate an NUnit 3.0 report. You may want to take a look at this community-developed TestCafe reporter for NUnit 3.0:https://github.com/NickLargen/testcafe-reporter-nunit3According to the reporter's documentation, it can include videos and screenshots as test case attachments. Please note that since this plugin is supported by the community, we cannot promise that it will work as expected in all cases.
|
I have made some automated test (Testcafe) and put them together in one of my VS projects folder as shownhere.1.Script to run and take screenshots on failure:script: testcafe chrome **/Tests/**/* -S -s takeOnFails=true --reporter spec,xunit:report.xml
displayName: Run tests and save screenshots
continueOnError: true2.Script for publishing:- task: PublishTestResults@2
displayName: Publish test results
continueOnError: true
inputs:
testResultsFiles: '**/report.xml'
searchFolder: $(System.DefaultWorkingDirectory)
testResultsFormat: 'JUnit'
publishRunAttachments: true
testRunTitle: "Task Results"
failTaskOnFailedTests: false3.Script for copying files and build artifact:- task: CopyFiles@2
inputs:
sourceFolder: '$(Build.SourcesDirectory)'
contents: '**/?(*.png)'
targetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1
inputs:
pathToPublish: '$(Build.ArtifactStagingDirectory)'
artifactName: Saved failure screenshotsIs there any way to add screenshots as attachment on test run created at step 2 likethisusing Yaml?Currently screenshots can be seen at artifact result from step 3 but I want them to be attached to test run.
|
How to attach Test failure screenshots to Azure Pipeline Test Run in YAML?
|
It seems like an issue with your SSH keys between the gitlab-runner host and GitLab. This topic might answer your question: Git-error-host-key-verification-failed-when-connecting-to-remote-repository
Basically, log in to your gitlab-runner host and check ${HOME}/.ssh/known_hosts. You should see the current public key of your GitLab host. If not, you will need to remove it and update it.
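For example, on the gitlab-runner host you could refresh the entry along these lines (a sketch; replace gitlab.example.com with your GitLab host):
# remove the stale key for the GitLab host
ssh-keygen -R gitlab.example.com

# fetch and append the current host key
ssh-keyscan gitlab.example.com >> ~/.ssh/known_hosts

# verify the connection now works
ssh -T git@gitlab.example.com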
|
I created a .gitlab-ci.yml file.
the project is already in the remote server.
I created gitlab-runner in my remote server and I chose the shell option.
my file .gitlab-ci.yml just makes the update for the project i.e. (we will do "git pull origin master"
and here is my .gitlab-ci.yml scriptstages:
- build
before_script:
- cd/home/devops/projects/my-project
building:
stage: build
script:
- git status
- sudo git pull origin masterwhen I run the pipeline i get this error.$git pull origin master.
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists. "please how could we solve this problem?
I'm really stuck with this problemthank you so much
|
How to solve the problem Host key verification failed
|
for those who are looking for resolution,Here's what you'll need to do:
In the Top-level feature.yaml, Add the desired variable group name and pass in the parameter from the template which has the azure subscription name:variables:
- group: xxx
- template: deploy.yaml
parameters:
subName: $(_subscriptionName)And utilize that parameter in template deploy.yaml:- task: AzureKeyVault@1
inputs:
azureSubscription: ${{ parameters.subName}}
keyVaultName: xxx
secretsFilter: '*'Its a known issue currently that we can not use variable from variable group directly to azureSubscription input of AzureKeyVault task as perthisthread.Thi work around should work fine!
|
I'm using Yaml for Azure devops pipeline.
I am using a hierarchical style for that.I'm having one top level yaml: feature.yaml
which has following structure:trigger:
...
pool:
vmImage: ...
variables:
group: ...
stages:
- template: deploy.yaml
parameters:
subName: $(subscription) #This should be taken from Variable groupI have deploy.yaml as:stages:
- stage: deploy
jobs:
- job: deploy
steps:
- task: AzureKeyVault@1
inputs:
azureSubscription: $(paramaters.subName) #This should be resolved from parameter passed form feature.yaml
KeyVaultName: ...
SecretsFilter: '*'
RunAsPreJob: trueHowever, whenever I run this from Azure DevOps, I'm getting this error:There was a resource authorization issue: "The pipeline is not valid. Job deploy: Step AzureKeyVault input ConnectedServiceName references service connection $(paramaters.subName) which could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz."It seems pipeline is not able to resolve value of azureSubscription name from variable group.Any suggestions?
|
Parameterize azureSubscription from variable group in yaml pipeline
|
setverbose=Truefor yourPipelineand you will see the time taken by each step of the pipeline, when fit method is applied.Here's how:model = Pipeline(steps=[
('data_transform', XYZ(p1=arg1, p2=arg2)),
('model', LogisticRegressionCV(solver='sag', multi_class='multinomial', class_weight='balanced', max_iter=5000))],
verbose=True)and when you do:model.fit(your_data)you will see similar output(I am pasting output of my pipeline here)
|
I have created a sklearn pipeline for my project with two component namely,data_transformandmodelas shown below.model = Pipeline([
('data_transform', XYZ(p1=arg1, p2=arg2)),
('model', LogisticRegressionCV(solver='sag', multi_class='multinomial', class_weight='balanced', max_iter=5000))])I call the fit method asmodel.fit(X_train, y_train). Since, my codetakes lot of time, I wanted to inspect the time taken by each component i.edata_transformandmodel. Is there any method from which I could find the time taken?
|
Profiling custom sklearn pipeline
|
Coming in a bit late on this one, but who knows, this might be helpful to someone running into similar issues. I cannot see your entire code, but the reason this might happen could be that you're trying to save your attachment in your OneTimeSetup or OneTimeTeardown methods. Quoting from the documentation:
Within a OneTimeSetUp or OneTimeTearDown method, the context refers to the fixture as a whole
...which basically means that using TestContext methods or properties is not allowed within OneTimeSetup or OneTimeTeardown (even though NUnit/Visual Studio won't complain if you do so! Which adds to the confusion). So make sure you add the test's attachments from within your TearDown or test case. AFAIK, there is no way to save attachments from TestFixture context.
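For instance, a rough sketch of the TearDown approach (this assumes the same _driver field and Selenium usings as in your code; TestStatus comes from NUnit.Framework.Interfaces):
[TearDown]
public void AttachScreenshotOnFailure()
{
    // Attach only when the test did not pass, to keep the attachments list small
    if (TestContext.CurrentContext.Result.Outcome.Status != TestStatus.Passed)
    {
        var path = Path.Combine(TestContext.CurrentContext.WorkDirectory,
                                $"Screenshot_{DateTime.Now:dd-MM-yyyy-hhmm-ss}.jpg");
        ((ITakesScreenshot)_driver).GetScreenshot().SaveAsFile(path, ScreenshotImageFormat.Jpeg);
        TestContext.AddTestAttachment(path, "screenshot");
    }
}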
|
We want to attach screenshots to the Test Attachment in Azure pipeline. Currently, we use.NET 4.5.2Selenium.WebDriver 3.141Selenium.Chrome.WebDriverNunit 3.12.0Specflow.Nunit 2.4.0It is similar with the follwing example but we use NUnit rather than MSTesthttps://learn.microsoft.com/en-us/azure/devops/pipelines/test/collect-screenshots-and-video?view=azure-devops#collect-screenshots-logs-and-attachmentsWhen run the program in VS2017, the screenshots are accessible from the test report. Also, we can see the screenshots in the azure build output.Here is the code:string fileName = string.Format("Screenshot_" + DateTime.Now.ToString("dd-MM-yyyy-hhmm-ss") + ".jpg");
var artifactDirectory = Directory.GetCurrentDirectory();
ITakesScreenshot takesScreenshot = _driver as ITakesScreenshot;
if (takesScreenshot != null)
{
var screenshot = takesScreenshot.GetScreenshot();
string screenshotFilePath = Path.Combine(artifactDirectory, fileName);
screenshot.SaveAsFile(screenshotFilePath, ScreenshotImageFormat.Jpeg);
TestContext.AddTestAttachment(screenshotFilePath, "screenshot");
Console.WriteLine($"Screenshot: {new Uri(screenshotFilePath)}");
}Visual Studio Test step in Azure pipelineAfter the build runs, there is no attachmentAny help would be appreciated.
|
NUnit How to attach screenshot to test attachments in Azure Pipeline
|
Yes! You can do something like this:p = Pipeline([
PreprocessData(),
ColumnTransformer([
(0, model1(params)), # Model 1 will receive Column 0 of data
([1, 2], model2(params)), # Model 2 will receive Column 1 and 2 of data
], n_dimension=2, n_jobs=2),
(evaluate)
])The flow of data will be split into two.Then_jobs=2should create two threads. It may also be possible to pass a custom class for putting back the data together using thejoinerargument. We'll be releasing some changes soon, so this should work properly. For now, the pipeline works with 1 thread.For what regards yourCatBoostRegressormodel that is like sklearn but that doesn't come from sklearn, can you try to doSKLearnWrapper(model1(params))instead of simplymodel1(params)when declaring your model in the pipeline? Probably that Neuraxle didn't recognize the model as a scikit-learn model (which is a BaseEstimator object in scikit-learn) even if your object had the same API as scikit-learn's BaseEstimator. So you may need to use theSKLearnWrappermanually around your model or to code your own similar wrapper to adapt your class to Neuraxle.Related:https://stackoverflow.com/a/60302366/2476920EDIT:You can use theParallelQueuedFeatureUnionclass of Neuraxle. Example coming soon.Also see this parallel pipeline usage example:https://www.neuraxle.org/stable/examples/parallel/plot_streaming_pipeline.html#sphx-glr-examples-parallel-plot-streaming-pipeline-py
|
I want to create a simple pipeline withneuraxle(I know I can use other libraries but I want to useneuraxle) where I want to clean data, split it, train 2 models and compare them.I want my pipeline to do something like this:p = Pipeline([
PreprocessData(),
SplitData(),
(some magic to start the training of both models with the split of the previous step)
("model1", model1(params))
("model2", model2(params))
(evaluate)
])I don't know if it's even possible because I couldn't find anything in the documentation.Also I tried using other models than those fromsklearn(e.g.catboost,xgboost...) and I get the errorAttributeError: 'CatBoostRegressor' object has no attribute 'setup'I thought about creating a class for the models but I won't use the hyperparam search ofneuraxle
|
How to run 2 pipelines in parallel in scikit-learn or Neuraxle?
|
In fact, you should consider using another tool to upload the files, for example rsync, which has a couple of useful features, such as compression of the data. It also only uploads files that were changed since the previous upload, which is going to speed up the uploads as well. You can use the rsync-deploy pipe, for example:
script:
- pipe: atlassian/rsync-deploy:0.3.2
variables:
USER: 'ec2-user'
SERVER: '127.0.0.1'
REMOTE_PATH: '/var/www/build/'
LOCAL_PATH: 'build'
EXTRA_ARGS: '-z'Note the-zoption passed via the EXTRA_ARGS. This will enable the data compression when transferring files.
|
I'm using a bitbucket pipeline to deploy my react application.Right now my pipeline looks like this:image: node:10.15.3
pipelines:
default:
- step:
name: Build
script:
- npm cache clean --force
- rm -rf node_modules
- npm install
- CI=false npm run deploy-app
artifacts: # defining build/ as an artifact
- 'build-artifact/**'
- step:
name: Deploy
script:
- apt-get update
- apt-get install ncftp
- ncftpput -v -u "$USERNAME" -p "$PASSWORD" -R $SERVER build 'build-artifact/*'
- echo Finished uploading buildIt works really well like this, but the ftp upload takes about 8 minutes, which is way to long because with the free plan of Bitbucket I can only use the pipeline feature for 50 minutes per month.It seems like the uploads of every small file takes forever. That's why I thought that maybe uploading a single zip file will be way more performant.So my question is: Is it really faster? And how it is possible to ZIP the artifact, upload the zip to the server and unzip it there?Thanks for your help
|
Bitbucket pipeline - build Create-react-app, zip it, upload it via ftp and unzip it on server
|
I would recommend applying OneHotEncoder to all categorical variables. Hence make that a separate pipeline. As it's a single-step process for numerical columns, you can use the ColumnTransformer directly. Try this!
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer, make_column_transformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline, make_pipeline
cat_preprocess = make_pipeline(SimpleImputer(strategy="most_frequent"), OneHotEncoder())
ct = ColumnTransformer([  # ColumnTransformer (not make_column_transformer) takes (name, transformer, columns) tuples
("num", SimpleImputer(strategy="median"), ["Pclass", "Age", "SibSp", "Parch", "Fare"]),
("str", cat_preprocess, ["Cabin", "Sex"]),
])
pipeline = Pipeline([('preprocess', ct)])
|
I have trouble understanding how pipelines are supposed to work in Sklearn. Following is an example using the titanic dataset.data = pd.read_csv('datasets/train.csv')
cat_attribs = ["Embarked", "Cabin", "Ticket", "Name"]
num_pipeline = Pipeline([
('imputer', SimpleImputer(strategy="median")),
])
str_pipeline = Pipeline([
('imputer', SimpleImputer(strategy="most_frequent")),
])
full_pipeline = ColumnTransformer([
("num", num_pipeline, ["Pclass", "Age", "SibSp", "Parch", "Fare"]),
("str", str_pipeline, ["Cabin", "Sex"]),
("cat", OneHotEncoder(), ["Cabin"]),
])
full_pipeline.fit_transform(data)
I'd expect this to fill all missing NaN values (both in numeric and string attributes), and then finally transform the Cabin attribute into a numerical one. Instead the code ends up with the following error:
ValueError: Input contains NaN.
If I remove the line calling the OneHotEncoder and print the transformed array, there is no NaN value. Hence I wonder: how am I supposed to call OneHotEncoder in this situation?
|
OneHotEncoder raising NaN issue after SimpleImputer has been called already
|
Currently SparkSubmitOperator/SparkSubmitHook aren't designed to return the job stats to XCom. You can easily update the operator to accommodate your needs:
from airflow.contrib.operators.spark_submit_operator import SparkSubmitOperator
class SparkSubmitOperatorXCom(SparkSubmitOperator):
def execute(self, context):
super().execute(context)
return self._hook._driver_status
Then you can initialise the operator to send the return value of the execute method to XCom:
task1 = SparkSubmitOperatorXCom(
do_xcom_push=True,
...
)
Note: In this case we are accessing a private property. This is the only way the SparkSubmitHook offers the driver status. For more complex job stats you will have to implement your own solution, as the hook doesn't seem flexible enough to provide everything for you.
|
In my airflow spark jobs, I have a requirement to pass the spark job stats to other tasks in the workflow. How do I push a value from SparkSubmitOperator to XCom?
task1 = SparkSubmitOperator(
task_id='spark_task',
conn_id='spark_default',
java_class='com.example',
application='example.jar',
name='spark-job',
verbose=True,
application_args=["10"],
conf={'master':'yarn'},
dag=dag,
)
#pass value from task1 to task 2 via xcom
def somefunc(**kwargs):
#pull value from task1
kwargs["ti"].xcom_pull(task_ids='spark_task')
task2 = PythonOperator(task_id='task2',
python_callable=somefunc,
provide_context=True,
dag=dag)
|
Airflow SparkSubmitOperator push value to xcom
|
This is one workaround I came up with (tested in PowerShell v5):
function Example {
[CmdletBinding()]
param(
[Parameter(ValueFromPipeline = $true)]
[byte]$Value
)
begin {
$stream = New-Object System.IO.MemoryStream
}
process {
try {
$dispose = $true
$stream.WriteByte($value)
# indicate that the process block finished normally
$dispose = $false
}
finally {
# detect stopped pipeline
if ($dispose) {
if ($stream) {
$stream.Dispose()
$stream = $null
}
}
}
}
end {
# regular dispose
if ($stream) {
$stream.Dispose()
}
}
}
Apparently there is a request on GitHub to introduce a new Dispose block or similar, which would be a great and much-needed improvement IMHO.
|
I was wondering how to properly dispose of objects in scripted cmdlets when the pipeline is stopped. Usually I would initialize the disposable object in the begin block, work with it in the process block, and finally dispose of it in the end block:
function Example {
[CmdletBinding()]
param(
[Parameter(ValueFromPipeline = $true)]
[byte]$Value
)
begin {
$stream = New-Object System.IO.MemoryStream
}
process {
$stream.WriteByte($value)
}
end {
$stream.Dispose()
}
}
But the end block is not executed when the pipeline is stopped (with Ctrl+C for example). And I cannot dispose of the object in the process block because I need it for the next step in the pipeline. I posted one possible approach as an answer, but is there any more robust solution? (Note: this is about scripted cmdlets only, not compiled.)
|
PowerShell: Dispose objects when pipeline stopped
|
First of all: that's an excellent question, I wonder why it hasn't been discussed widely until now. I can think of two possible approaches.
Fusing Operators: As pointed out by @Kris, combining operators together appears to be the most obvious workaround.
Separate Top-Level DAGs: Read below.
Separate Top-Level DAGs approach
Given: say you have tasks A & B, where A is upstream of B, and you want execution to resume (retry) from A if B fails.
(Possible) Idea: If you're feeling adventurous, put tasks A & B in separate top-level DAGs, say DAG-A & DAG-B. At the end of DAG-A, trigger DAG-B using a TriggerDagRunOperator. In all likelihood, you will also have to use an ExternalTaskSensor after the TriggerDagRunOperator. In DAG-B, put a BranchPythonOperator after Task-B with trigger_rule=all_done. This BranchPythonOperator should branch out to another TriggerDagRunOperator that then invokes DAG-A (again!).
Useful references: Fusing Operators Together, Wiring Top-Level DAGs together.
EDIT-1: Here's a much simpler way that can achieve similar behaviour: How can you re-run upstream task if a downstream task fails in Airflow (using Sub Dags)
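For illustration, a rough sketch of what DAG-B in that idea could look like (assuming Airflow 1.x-style imports; dag_a, dag_b, task_b and the task_b_work/choose_next callables are hypothetical placeholders, not from the question):
from datetime import datetime
from airflow import DAG
from airflow.operators.python_operator import PythonOperator, BranchPythonOperator
from airflow.operators.dagrun_operator import TriggerDagRunOperator
from airflow.operators.dummy_operator import DummyOperator
dag_b = DAG('dag_b', start_date=datetime(2019, 1, 1), schedule_interval=None)
def task_b_work():
    pass  # placeholder for the real Task-B job that may fail
task_b = PythonOperator(task_id='task_b', python_callable=task_b_work, dag=dag_b)
def choose_next(**context):
    # runs even if task_b failed, thanks to trigger_rule='all_done' on the branch task
    ti = context['dag_run'].get_task_instance('task_b')
    return 'retrigger_dag_a' if ti.state == 'failed' else 'stop'
branch = BranchPythonOperator(task_id='branch', python_callable=choose_next,
                              provide_context=True, trigger_rule='all_done', dag=dag_b)
retrigger_dag_a = TriggerDagRunOperator(task_id='retrigger_dag_a', trigger_dag_id='dag_a', dag=dag_b)
stop = DummyOperator(task_id='stop', dag=dag_b)
task_b >> branch >> [retrigger_dag_a, stop]
DAG-A would symmetrically end with a TriggerDagRunOperator (and, if it needs to wait on DAG-B, an ExternalTaskSensor) pointing at dag_b.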
|
With Airflow, is it possible to restart an upstream task if a downstream task fails? This seems to be against the "Acyclic" part of the term DAG. I would think this is a common problem though.
Background
I'm looking into using Airflow to manage a data processing workflow that has been managed manually. There is a task that will fail if a parameter x is set too high, but increasing the parameter value gives better quality results. We have not found a way to calculate a safe but maximally high parameter x. The process by hand has been to restart the job, if it failed, with a lower parameter until it works. The workflow looks something like this:
Task A - Gather the raw data
Task B - Generate config file for job
Task C - Modify config file parameter x
Task D - Run the data manipulation Job
Task E - Process Job results
Task F - Generate reports
Issue
If task D fails because of parameter x being too high, I want to rerun task C and task D. This doesn't seem to be supported. I would really appreciate some guidance on how to handle this.
|
Can a failed Airflow DAG Task Retry with changed parameter
|
There is nothing* wrong with using Pipeline and invoking the fit method. If a stage is a Transformer (and PipelineModel is**), fit works like identity. You can check the relevant Python:
if isinstance(stage, Transformer):
transformers.append(stage)
dataset = stage.transform(dataset)
and Scala code:
case t: Transformer =>
  t
This means that the fitting process will only validate the schema and create a new PipelineModel object.
* The only possible concern is the presence of non-lazy Transformers, though, with the exception of the deprecated OneHotEncoder, the Spark core API doesn't provide such.
** In Python:
from pyspark.ml import Transformer, PipelineModel
issubclass(PipelineModel, Transformer)
True
In Scala:
import scala.reflect.runtime.universe.typeOf
import org.apache.spark.ml._
typeOf[PipelineModel] <:< typeOf[Transformer]
Boolean = true
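For illustration, a small self-contained PySpark sketch (toy data and column names are made up, not from the question) of that point: wrapping two already-fitted models in a new Pipeline and calling fit only validates the schema, nothing gets re-fitted.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, VectorAssembler
spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("a", 1.0), ("b", 2.0)], ["cat", "num"])
pipe_model = Pipeline(stages=[StringIndexer(inputCol="cat", outputCol="cat_idx")]).fit(df)
pipe_model2 = Pipeline(stages=[VectorAssembler(inputCols=["cat_idx", "num"], outputCol="features")]).fit(pipe_model.transform(df))
combined = Pipeline(stages=[pipe_model, pipe_model2]).fit(df)  # both stages are Transformers, so this fit is cheap
combined.transform(df).show()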
|
I would like to concatenate several trained Pipelines into one, which is similar to "Spark add new fitted stage to a exitsting PipelineModel without fitting again"; however the solution below is for PySpark.
> pipe_model_new = PipelineModel(stages = [pipe_model , pipe_model2])
> final_df = pipe_model_new.transform(df1)
In Apache Spark 2.0, "PipelineModel"'s constructor is marked as private, hence it cannot be called from outside. While in the "Pipeline" class, only the "fit" method creates a "PipelineModel":
val pipelineModel = new PipelineModel("randomUID", trainedStages)
val df_final_full = pipelineModel.transform(df)
Error:(266, 26) constructor PipelineModel in class PipelineModel cannot be accessed in class Preprocessor
val pipelineModel = new PipelineModel("randomUID", trainedStages)
|
Add new fitted stage to an existing PipelineModel without fitting again
|
You are missing that when calling send, the coroutine will run until the next yield, and that yield's value will be returned by the send call, so if you do:
c=coroutine()
c.__next__()
print(c.send([1,2,3,4,5]))
for val in c:
print(val)
You will see how the missing value is printed (as it is yielded in the send call). Here you have the live example. For the behaviour you desire you can add an extra yield statement to the coroutine:
def coroutine():
print('Starting coroutine')
value = (yield)
yield
for i in value:
yield i
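With that extra yield in place, the driver code from the question works unchanged and no item is lost:
c = coroutine()
c.__next__()             # advance to value = (yield)
c.send([1, 2, 3, 4, 5])  # deliver the list; the coroutine parks on the bare yield
for val in c:
    print(val)           # prints 1 2 3 4 5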
|
So I understand how generators and coroutines work. Broadly speaking, generators produce data and coroutines consume data. Now, what I am trying to do is combine both these features. I have defined a coroutine that receives a list as an input and then tries to yield items from the list one at a time, like a generator would do. Here is my code -
def coroutine():
print('Starting coroutine')
value = (yield)
for i in value:
yield i
c=coroutine()
c.__next__()
c.send([1,2,3,4,5])
for val in c:
print(val)
The problem is, the first list item is being lost. The value 1 is not being returned from the coroutine. Based on my understanding, the flow should have been as follows.
c=coroutine() ----> Declares the coroutine without starting it.
c.__next__() ----> This starts the coroutine and it advances to the line value = (yield) and stops there.
c.send([1,2,3,4,5]) ----> This passes the new list to the waiting coroutine, i.e. value = (yield). The coroutine now proceeds to the next yield statement inside the for loop.
The for loop in the main program is supposed to receive each item of the list that it initially passed. But this does not happen. Can you please explain why? The reason I am trying to do this is to build a pipeline. Each component will receive items, modify them and then yield them to the next coroutine in the pipeline. Please help.
EDIT -------------------- The output is as follows -
Starting coroutine
2
3
4
5
|
Can a coroutine yield values in Python?
|
For a FeatureUnion to output a DataFrame you can use the PandasFeatureUnion from this blog post. Also see the gist.
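For reference, here is a rough sketch of the idea (not the blog post's exact code); it assumes every wrapped transformer already returns a DataFrame and simply concatenates the results with pandas:
import pandas as pd
from sklearn.pipeline import FeatureUnion
class PandasFeatureUnion(FeatureUnion):
    def transform(self, X):
        # self.transformer_list holds (name, fitted_transformer) pairs after fit()
        frames = [trans.transform(X) for name, trans in self.transformer_list]
        return pd.concat(frames, axis=1)
    def fit_transform(self, X, y=None, **fit_params):
        self.fit(X, y, **fit_params)
        return self.transform(X)
On recent scikit-learn versions (1.2+), FeatureUnion(...).set_output(transform="pandas") may already give you DataFrame output without a custom class.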
|
So I currently have a Pipeline that has a lot of custom transformers:
p = Pipeline([
("GetTimeFromDate",TimeTransformer("Date")), #Custom Transformer that adds ["time"] column
("GetZipFromAddress",ZipTransformer("Address")), #Custom Transformer that adds ["zip"] column
("GroupByTimeandZip",GroupByTransformer(["time","zip"])) #Custom Transformer that adds onehot columns
])
Each transformer takes in a pandas dataframe and returns the same dataframe with one or more new columns. It actually works quite well, but how can I run the "GetTimeFromDate" and the "GetZipFromAddress" steps in parallel? I would like to use FeatureUnion:
f = FeatureUnion([
("GetTimeFromDate",TimeTransformer("Date")), #Custom Transformer that adds ["time"] column
("GetZipFromAddress",ZipTransformer("Address")), #Custom Transformer that adds ["zip"] column])
])
p = Pipeline([
("FeatureUnionStep",f),
("GroupByTimeandZip",GroupByTransformer(["time","zip"])) #Custom Transformer that adds onehot columns
])
But the problem is that FeatureUnion returns a numpy.ndarray, but the "GroupByTimeandZip" step needs a dataframe. Is there a way I can get FeatureUnion to return a pandas dataframe?
|
How to make FeatureUnion return Dataframe
|
Assuming you want to catch whatever went wrong, store it and proceed with the calculation, tryCatch would be a natural way to handle this. In the following chunk I'm catching an error and outputting the state. Clearly you can implement the same logic to store the parameters somewhere as the "current valid state".
invisible(mapply(x = 1:10, y = rnorm(10), FUN = function(x, y) {
out <- tryCatch(
if (x == 3) {
simpleError("o ow, something went wrong")
} else {
x * y
})
if (any(class(out) %in% "simpleError")) {
message(sprintf("Something went wrong for x: %s, y: %s", x, y))
} else {
out
}
}))
|
I'm running some scientific experiments which involve quite a lot of trial and error. What I basically do is pass some object along a chain of numerical methods with a bunch of parameters on each stage, like so:
x <- define_object(param1, param2) %>%
run_method(param3, param4) %>%
process_results(param5) %>%
save_plot(param6)
When everything goes smoothly, I'm happy. However, each stage may fail (e.g. I modified the source and left a syntax error, or the method is not compatible with the parameter set I provide), so the whole chain stops and the object is lost regardless of the steps that succeeded. In such cases I'd like to keep the object in its latest valid state (I call that a "safe state"). I need that because each stage can take quite some time to execute. Well, naturally, the trivial answer would be to ditch the piping paradigm and go with the traditional approach:
x <- define_object(param1, param2)
x <- run_method(x, param3, param4)
x <- process_results(x, param5)
x <- save_plot(x, param6)
so I can resume the chain as soon as I fix it. But maybe there's an option to still have pipes and keep the safe state? I'm not a number one fan of pipes, but the flow I have for this task asks for it, though the inconvenience I described is a deal-breaker.
|
Capturing object's "safe state" within pipe chain
|
You need to enter the username and password for the remote url. I have tested the following in the past and it worked for me:
git remote set-url origin https://$USERNAME:$PASSWORD@hub.jazz.net/git/user/project
where USERNAME and PASSWORD are environment properties on the stage. I recommend setting the password as a secure property so the value is not displayed in the logs. You should be able to push your tags after that.
|
I have a Build & Deploy pipeline in Bluemix, and I would like to tag git if the stage has been successfully deployed.
Currently, I have added a build step after a deploy step in my deploy stage with a shell script that looks like:
# put the git tag
echo 'Put tag build_$BUILD_NUMBER on git'
git tag build_$BUILD_NUMBER
git push --tags
The returned error is:
fatal: could not read Username for 'https://hub.jazz.net': No such device or address
Build step 'Execute shell' marked build as failure
But it seems that I have no .gitcredentials file to push without needing to add authentication information. How can I push a tag to git from my delivery pipeline?
|
Bluemix: How can I configure a delivery pipeline stage build to tag git?
|
Without forwarding, the load word instruction will have register 10 updated after the 1st half of the clock cycle in the write back stage. The store word instruction will need to read that value in register 10 in the second half of the clock cycle in the decode stage, producing the following 2 stalls in the decode stage:
F D E M W
  F D D D E M W
|
I am looking at the number of stalls in the following MIPS code with and without forwarding. I am trying to get a better understanding of when the data is needed in the datapath.
lw $10, 0($4)
sw $10, 24($5)
With forwarding, I get the following with the understanding that the value going into register 10 from the load word instruction is available after the memory stage, and that value is needed by the store word instruction before its memory stage. Hence, there are zero stalls.
F D E M W
  F D E M W
If there is no forwarding, register 10 will not have the correct value from the load word instruction until it is written in the first half of the clock cycle in the write back stage. Is it correct to say that the store word instruction needs the correct value of register 10 in the second half of the clock cycle in the decode stage, producing the following two stalls:
F D E M W
  F F F D E M W
Or is it that the store word instruction needs it in the execute stage, producing this sequence of two stalls:
F D E M W
  F D D D E M W
I guess I'd like a way of phrasing this in my head to better my understanding.
|
MIPS Pipeline with and without Forwarding
|
You could use sed.
$ sed 's/.*\(0x[0-9a-f][0-9a-f][0-9a-f][0-9a-f]\).*/\1/' file
0x8001
channel 1: 123
channel 2: 234
channel 3: 345
0x8002
channel 1: 456
channel 2: 567
channel 3: 678
I assumed that there is only one hexadecimal string like 0x8001 present in a line.
|
So there is an input file res.txt like this:
processing file 0x8001.values
channel 1: 123
channel 2: 234
channel 3: 345
processing file 0x8002.values
channel 1: 456
channel 2: 567
channel 3: 678
I have a pattern like this:
0x[0-9a-f][0-9a-f][0-9a-f][0-9a-f]
Using, for example,
grep -o "0x[0-9a-f][0-9a-f][0-9a-f][0-9a-f]" res.txt
I can get the list of my filenames (without .values), which is fine!
0x8001
0x8002
But I still want all the other lines which did not match the pattern to stay where they are, like this:
0x8001
channel 1: 123
channel 2: 234
channel 3: 345
0x8002
channel 1: 456
channel 2: 567
channel 3: 678
I am quite familiar with sed but I could not find a way to do what I want.
|
Find lines by pattern, leave only pattern but leave unmatched lines as they are
|
A common assumption is that you can write in the first half of a cycle, and read in the second half of a cycle. Let's say that I1 is your first instruction and I2 your second instruction, and I2 is using a register that I1 is modifying.
Only 1 memory port.
This means that you cannot read or write memory at the same time in two different stages of the pipeline. For instance, if I1 is at the MEM stage, another instruction cannot be at the IF stage at the same time, because both require memory access.
No data forwarding.
Data forwarding refers to the fact that at the end of the EX stage for I1, you forward the data to the ID cycle of I2. Consequently, no forwarding means that the pipeline has to wait for the WB stage of I1 to go to the ID stage of I2. With the assumption above, you can go to the ID stage at the same time as the WB stage of the previous instruction, because WB will write to memory during the first half of the cycle, and ID will read from memory during the second half of the cycle.
Branch stalls until end of EX stage.
This is a common assumption that doesn't use branch prediction techniques. It simply states that an instruction after a branch has to wait until the end of the EX stage to start its ID stage. Recall that the address of the next instruction to be executed is known only at the EX stage of the branch instruction.
|
I'm unsure about how the following properties affect pipeline execution for a 5 stage MIPS design (IF, ID, EX, MEM, WB). I just need some clearing up.
only 1 memory port
no data forwarding
branch stalls until end of * stage
Does the 1 memory port mean we cannot fetch or write when we read/write to mem (i.e. during the MEM stage of lw/sw you can't enter IF or another MEM)?
With no forwarding, does this mean an instruction won't enter the ID stage until after or on the WB stage of the previous instruction it depends on?
I don't know what the branch stall means.
|
Organizing pipeline in MIPS
|
The problem that you are running into is that, under bash (other shells may differ), a pipeline terminates only after all commands in the pipeline are finished. One solution is to use:
while [ 1 ]; do
cat pattern_file || break
done | socat - /dev/ttyS0
If socat terminates, then the cat command will fail when it runs. However, the mere failure of a command in a loop does not cause the loop to terminate. By adding the break command, we can ensure that, if the cat fails, then the loop will terminate.
Another solution is to avoid pipelines altogether and use process substitution:
socat - /dev/ttyS0 < <(while true; do cat pattern_file; done)
Documentation
The problematic pipeline behavior is documented in man bash:
The shell waits for all commands in the pipeline to terminate before returning a value.
|
The simple script:
while [ 1 ]; do
cat pattern_file
done | socat - /dev/ttyS0
It makes a stream of the looped pattern included in the file and sends it into the serial port via socat. The script also allows reading data back from the serial port.
Unfortunately, when socat ends (e.g. killed) the loop hangs forever without any error message. I want to avoid:
pts
more scripts than one
reopening the serial port for every pattern_file
|
Bash doesn't end the loop despite pipeline is broken
|
Yes, you can. The commands are contained in a CommandCollection object, which inherits from Collection. To remove the second command and add the third you could do the following:
$PS.Commands.Commands.RemoveAt(1)
$PS.AddCommand("Command-Three").AddParameter("Param3")
Second question: if I understand you correctly, you are wondering whether, after you use the Invoke method, the commands are cleared from the CommandCollection and it is left empty? No, that is not the case:
PS> $PS = [Powershell]::Create()
PS> $PS.AddCommand("New-Item").AddParameter("Name", "File.txt").AddParameter("Type","File")
PS> $PS.AddCommand("Remove-Item").AddParameter("Path", "File.txt")
PS> $PS.Invoke()
PS> $PS.Commands
Commands
--------
{New-Item, Remove-Item}
If you would like to clear them, run the Clear method on the Commands property:
PS> $PS.Commands.Clear()
PS> $PS.Commands
Commands
--------
{}
|
If I understood correctly, using AddCommand() adds a command to a pipeline. That means, for example:
$PS = [PowerShell]::Create()
$PS.AddCommand("Command-One").AddParameter("Param1");
$PS.AddCommand("Command-Two").AddParameter("Param2");
$PS.Invoke();
will invoke the following script:
PS> Command-One -Param1 | Command-Two -Param2
Now, can I change this script to
PS> Command-One -Param1 | Command-Three -Param3
without having to reinitialize $PS? I didn't find a method like DelCommand that would remove the last command from a pipeline. The second question is whether successful execution of Invoke() cleans out all pending commands, leaving the pipeline empty.
|
Remove Command from powershell pipeline
|
a) The Build Pipeline view does not have the functionality to show promotion stars in it.
b) The way you have passed the parameters is correct. It should work when you use ${iso.name} in build steps. But if you use this in an 'Execute batch command' step it will not work; you will have to use %iso.name% in a batch command.
c) Builds that are triggered by promotion are not visible because of a bug in the build pipeline plugin: https://issues.jenkins-ci.org/browse/JENKINS-22203
|
When I set up a Project A which triggers Project B (with parameters), and Project B then triggers Projects C1 and C2, the whole chain (with parameters) shows up neatly in the Build Pipeline view of Jenkins. However, I have added a Promoted Builds setting on Project B which tracks completion of C1 and C2. There are now 3 problems with this:
a) A minor thing, but I really wondered if I am doing something wrong, as it seems to be an essential functionality to me: the promotion (stars) are not visible in the Build Pipeline view.
b) What is worse, I set up the promotion action (of B) to trigger a new Job D. This works, however I cannot pass the build parameters of Job B along (D receives the unexpanded value ${iso.name}).
c) The Project D job triggered by the promotion runs and shows that it was triggered by B, and I also see in the promotion log of B that it triggered it. But it does not show in the Build Pipeline view; is there a way to get it added (it generally does not show up as a downstream build)? Would it help to actually share a fingerprinted artifact?
|
Jenkins - Promoted Builds in Pipeline, configuring parameters in promotion action
|
It's not currently possible (other than ugly workarounds where you construct the pipe manually via set var (echo $initialinput | firstcommand); and set var (echo $var | secondcommand); and so on). This is tracked as fish bug #2039.
|
In fish:
if false | true | true
echo "Fish thinks OK because of last status"
else
# But I...
echo "Need the entire pipeline to be true"
end
Bash has $PIPESTATUS. How does one test the integrity of a pipeline in Fish?
To clarify: I'm using true and false in the example pipeline as an example of a pipeline whose last component succeeds. It's not meant to be a boolean statement. Normally, if any component of a pipeline fails, one would consider the pipeline as having failed.
|
Fish Pipeline Status
|
The cached copy has a purpose: when git pulls changes from the remote, the cached copy is used to only pull what is missing, no more. Then this cached copy is cloned to a new revision using git, and when git clones repositories on the same disk it creates hard links - so your .git/objects are not duplicated; they are the same files shared across all your "copies". I suggest you leave this directory untouched, it is actually important.
|
We are using Opscode Chef in our pipeline and we notice that the deployment (see http://docs.opscode.com/resource_deploy.html) creates a complete copy of our source code in /shared/cached-copy. It already has nearly a thousand complete versions of it (not just deltas!) in its .git/objects folder, so the file size grows and grows. Is there any way to get this cleaned up or even completely prevented? We don't need it at all. For sure I could write something to delete the directory after each deployment, but is there a good way to handle this? Thanks.
|
Opscode Chef - way to cleanup /shared/cached-copy
|
I would expect that your Get-Node cmdlet would return a fully populated object graph. Here's a similar example of this using XML:
$xml = [xml]@'
<?xml version="1.0" encoding="ISO-8859-1"?>
<bookstore>
<book>
<title lang="eng">Harry Potter</title>
<price>29.99</price>
</book>
<book>
<title lang="eng">Learning XML</title>
<price>39.95</price>
</book>
</bookstore>
'@
$xml.SelectNodes('//*') | Where {$_.ParentNode.Name -eq 'book'}
In answer to your question about accessing another object in the pipeline, it isn't uncommon to create intermediate variables as part of the pipeline that can be referenced later:
Get-Process | Foreach {$processName = $_.Name; $_.Modules} |
Foreach {"$processName loaded $($_.ModuleName)"}
In this scenario I stash the System.Diagnostics.Process object's name before propagating a completely different type down the pipeline, i.e. System.Diagnostics.ProcessModule. Then I can combine the stashed process name with the module's name to produce the output I want. The above approach is good for pedagogical purposes but isn't really canonical PowerShell. This would be a more typical way to do this in PowerShell:
Get-Process | Select Name -Exp Modules | Foreach {"$($_.Name) loaded $($_.ModuleName)"}
In this scenario we've taken the Process's name and projected it into each ProcessModule object.
Note that some of the processes will generate an error when you try to enumerate their modules collection.
|
Suppose I have objects that have a parent-child relationship like this:
public class Node
{
public string Name { get; set; }
public string Type { get; set; }
public Node Parent { get; set; }
}
Now, I would like to create a cmdlet that supports syntax like this:
Get-Node | where {$_.Type -eq "SomeType" -and $_.Parent.Name -eq "SomeName" }
Here, the Parent property needs to somehow reference another object in the pipeline. Is something like this even possible in PowerShell? If not, what are the alternatives?
[Edit]
If I use the class above like this:
var root = new Node
{
Name = "root",
Type = "root",
Parent = null
};
var nodeA = new Node
{
Name = "A",
Type = "node",
Parent = root
}
WriteObject(root);
WriteObject(nodeA);
And then load the module and try this command:
Get-MyNode | where {$_.Parent.Name = "root"}
I get this error:
Property 'Name' cannot be found on this object; make sure it exists and is settable.
At line:1 char:31
+ Get-MyNode | where {$_.Parent. <<<< Name = "root"}
+ CategoryInfo : InvalidOperation: (Name:String) [], RuntimeException
+ FullyQualifiedErrorId : PropertyNotFound
I would like the Parent property to reference another object in the pipeline like a real Node object.
[Edit] This error was caused by the public keyword missing from the class definition. Adding the keyword fixed the issue and made the example work.
|
Navigating parent-child relation in PowerShell cmdlet
|