Columns: Response (string, 8–2k characters), Instruction (string, 18–2k characters), Prompt (string, 14–160 characters).
Change this:

    $Groups = $GroupList.split(",");
    $Groups = (Get-AdGroup -identity ***each group member of $Groups*** | select name -expandproperty name)

into this:

    $Groups = $GroupList.Split(",") | Get-ADGroupMember | Select-Object -ExpandProperty Name

Better yet, define $GroupList as an array right away (as @Chard suggested), so you don't need to split a string in the first place:

    $GroupList = 'My_Group_Name1', 'My_Group_Name2', 'My_Group_Name3', 'My_Group_Name4'
    $Groups = $GroupList | Get-ADGroupMember | Select-Object -Expand Name
I am trying to get the PowerShell syntax right to create a list of AD group names which I can then go on to loop through and process. The working code for one named group is:

    $Groups = (Get-AdGroup -identity "My_Group_Name" | select name -expandproperty name)

They split this AD group into 8 sub-groups, so requirements now dictate that $Groups is a list of 8 known groups. So I am aiming for something like:

    $GroupList = "My_Group_Name1,My_Group_Name2,My_Group_Name3,My_Group_Name4";
    $Groups = $GroupList.split(",");
    $Groups = (Get-AdGroup -identity ***each group member of $Groups*** | select name -expandproperty name)

It's the bit that does ***each group member of $Groups*** that I am struggling with.
Cycling through a list of values for Group Names to Process
Bluemix Local includes a private syndicated catalog that displays the local services that are available exclusively to you. It also includes additional services that are made available for you to use from Bluemix Public. The syndicated catalog provides the function to create hybrid applications that consist of public and private services. Bluemix Local comes with all included Bluemix runtimes and a set of services and components available. Take a look at Table 1, "Local Services", in the Bluemix Local Docs. As you can see, for example, the Auto-Scaling service is already included in the local environment. However, you have the option to decide which public services meet the requirements for your business based on your data privacy and security criteria.
Does Bluemix Local provide DevOps services like Delivery Pipeline and Active Deploy?
Bluemix: Are devops services available on Bluemix local?
Both Dataflow pipeline runners return a PipelineResult that allows you to query the current status of the pipeline. The DataflowPipelineRunner returns a PipelineResult immediately upon job submission, but the BlockingDataflowPipelineRunner doesn't return it until the job has completed. In addition, the BlockingDataflowPipelineRunner throws an exception if the job does not complete successfully -- since you've specified a blocking run(), we assume you want to know if something goes wrong. So if you've hard-coded the blocking runner, then relying on an exception is an easy way to handle failure. Note that the code snippet you wrote uses the more general PipelineResult option, but won't work with the non-blocking runner, since that will return a result while the job is still running.
My code for handling my pipeline result looks like this (some snipped for brevity's sake):

    PipelineResult result = pipeline.run();
    switch (result.getState()) {
        case DONE:      { handleDone(); break; }
        case FAILED:    { handleFailed(); break; }
        case CANCELLED: { handleCancelled(); break; }
        case UNKNOWN:
        case RUNNING:
        case STOPPED:   { handleUnknownRunningStopped(); break; }
        default:        { assert false; throw new IllegalStateException(); }
    }

However, I've noticed that instead of returning a value of the enum PipelineResult.State for e.g. FAILED or CANCELLED, an exception is thrown:

- For a failed job, a DataflowJobExecutionException is thrown
- For a cancelled job, a DataflowJobCancelledException is thrown

What is the correct way (programmatically) to handle the result of a pipeline?
What is the correct way to get the result of Pipeline?
Your code has two syntax errors and a logic error. The first syntax error is that the function's return type should be a collection, not a row type, so

    return t2_tab pipelined

The second syntax error is that you need to include the type when instantiating an object, and the number of arguments must match the signature of the type. So the outer loop assignment should be:

    l_row2(i) := t2_row(i, 'T2', l_row1);

The logic error is that we don't need to maintain a collection variable for the output. We just need a row variable. Also the indexed counts seem a bit confused, so my code may differ from your intention.

    create or replace function fn (r in number) return t2_tab pipelined is
        l_tab1 t1_tab;
        l_row2 t2_row;
    begin
        for i in 1..r loop
            l_tab1 := new t1_tab();
            l_tab1.extend(r);
            for j in 1..r loop
                l_tab1(j) := t1_row(j*i, 'a2 ' || j);
            end loop;
            l_row2 := t2_row(i, 'T2', l_tab1);
            PIPE ROW (l_row2);
        end loop;
        return;
    end;
    /

Here is the run:

    SQL> select * from table(fn(3));

    B1    B2  B3(A1, A2)
    ----- --- ---------------------------------------------------------------
    1     T2  T1_TAB(T1_ROW(1, 'a2 1'), T1_ROW(2, 'a2 2'), T1_ROW(3, 'a2 3'))
    2     T2  T1_TAB(T1_ROW(2, 'a2 1'), T1_ROW(4, 'a2 2'), T1_ROW(6, 'a2 3'))
    3     T2  T1_TAB(T1_ROW(3, 'a2 1'), T1_ROW(6, 'a2 2'), T1_ROW(9, 'a2 3'))

    SQL>
I am just trying to pipe rows of a nested type. There are tons of examples around but none that I am able to apply. My types are:

    create type t1_row as object (
        a1 number,
        a2 varchar2(10)
    );
    create type t1_tab as table of t1_row;

    create type t2_row as object (
        b1 number,
        b2 varchar2(10),
        b3 t1_tab
    );
    create type t2_tab as table of t2_row;

I've tried to create a function in so many ways, but none of them compile successfully. One example:

    create or replace function fn (r in number) return t2_row pipelined is
        l_row1 t1_tab;
        l_row2 t2_tab;
    begin
        for i in 1..r loop
            for j in 1..r loop
                l_row1(j).a1 := j;
                l_row1(j).a2 := 'a2 ' || j;
            end loop;
            l_row2(i) := (i,l_row1);
            PIPE ROW (l_row2);
        end loop;
        return;
    end;

This code produces the following errors:

    [Error] PLS-00630 (1: 12): PLS-00630: pipelined functions must have a supported collection return type
    [Error] PLS-00382 (10: 22): PLS-00382: expression is of wrong type

Any help, advice or similar example would be useful. Version: Oracle 11g Release 11.2.0.1.0
Pipe nested object type
In your previous question you were using Patterson's book, so let me borrow one of its diagrams. The important bit here is the hazard detection unit, which is causing the bubbles. If you've read the accompanying text, you know that the method by which it does that is to NOP out the control signals and pause IF (keep the IF/ID buffer fixed and don't advance PC). Which means that your pipeline diagram cannot happen like that. There will not be a new instruction entering every cycle. Also consider that if you had different code, you could have arranged for hardware hazards to happen, as Jester described. So that's obviously bad, and the solution is stalling IF. This is what would happen:

    I1: IF ID EX MEM WB
    I2: IF ID EX MEM WB
    I3: IF ID -- -- --
    I3: IF ID -- -- --
    I3: IF ID EX MEM WB
    I4: IF ID -- -- --

etc.
With the standard 5-stage pipeline for the MIPS architecture, and assuming some instructions depend on each other, how are the pipeline bubbles inserted for the following assembly code?

    I1: lw $1, 0($0)
    I2: lw $2, 4($0)
    I3: add $3, $1, $2 ; I1 & I2 -> I3
    I4: sw $3, 12($0)  ; I3 -> I4
    I5: lw $4, 8($0)
    I6: add $5, $1, $4 ; I1 & I5 -> I6
    I7: sw $5, 16($0)  ; I6 -> I7

In the first place that we insert a bubble, we have

    I1: IF ID EX MEM WB
    I2: IF ID EX MEM
    I3: IF ID --
    I4: IF ID

As you can see, while I3 is stalled, I4 can proceed for decoding. Isn't that right? Next,

    I1: IF ID EX MEM WB
    I2: IF ID EX MEM WB
    I3: IF ID -- EX MEM WB
    I4: IF ID -- -- EX MEM WB
    I5: IF ID EX MEM WB
    I6: IF ID -- EX MEM WB
    I7: IF ID -- -- EX MEM WB

I think that is possible with the standard MIPS pipeline, but some say that whenever a bubble is inserted the whole pipeline is stalled. How can that be figured out?
understanding MIPS assembly with pipelining
Individual items are processed sequentially, but the processing of the entire sequence of items happens in parallel (mostly). As soon as an item is finished being processed by a given task, it is sent to the next task, which starts processing it right away. Without pipelines, the first task would have to process the entire sequence of values before the second task could start. This image from the MSDN article should help:
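A minimal Python sketch of this idea (the stage functions and queue wiring are my own illustrative assumptions, not from the MSDN sample): each stage runs at the same time as the others, but always on a different item, passing finished items downstream immediately.

```python
import threading, queue

def stage(worker, inbox, outbox):
    # Each stage runs in its own thread: it pulls one item, processes it,
    # and hands it to the next stage before the whole sequence is done.
    while True:
        item = inbox.get()
        if item is None:          # sentinel: propagate shutdown downstream
            outbox.put(None)
            break
        outbox.put(worker(item))

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=stage, args=(str.strip, q1, q2)).start()   # stage 1: clean string
threading.Thread(target=stage, args=(str.title, q2, q3)).start()   # stage 2: correct case

for line in ["  hello world ", "  pipelines overlap work "]:
    q1.put(line)                  # items enter one at a time
q1.put(None)

while (result := q3.get()) is not None:
    print(result)                 # results stream out as soon as each item finishes
```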
On MSDN it says that each task is processed concurrently and each subsequent task depends on the output from the previous. However, how can they occur concurrently if the processing of the subsequent task needs the output from the previous in order to even process something? Doesn't this imply that a task needs the task that precedes it to complete its execution before it can begin? (Which doesn't sound very concurrent or parallel to me.) For example, in their diagram they show the processing of a string with these steps:

- Read string
- Correct case
- Create sentences
- Write output

How can I work on creating sentences before I read all the strings with corrected case? The assembly line analogy also doesn't sound very parallel to me, since if one stage in the assembly breaks, how can the partially constructed product move to the next stage for further assembly?
How can pipeline tasks run concurrently if each task depends on the previous output?
You don't say what version of make you're using, but I'll assume GNU make. There are a few ways of doing things like this; I wrote a set of blog posts about metaprogramming in GNU make (by which I mean having make generate its own rules automatically). If it were me I'd probably use the constructed include files method for this. So, I would have your rule above for ranges.txt instead create a makefile, perhaps ranges.mk. The makefile would contain a set of targets such as a_1.txt, a_2.txt, etc. and would define target-specific variables defining the start and stop values. Then you can -include the generated ranges.mk and make will rebuild it. One thing you haven't described is when you want to recompute the ranges: does this really depend on the contents of a.txt? Anyway, something like:

    .PHONY: all
    all:

    ranges.mk: a.txt # really? why?
    	for i in 0 25 50 75; do \
    	  echo 'a_$$i.txt : RANGE_START := $$(($$i+1))'; \
    	  echo 'a_$$i.txt : RANGE_END := $$(($$i+25))'; \
    	  echo 'TARGETS += a_$$i.txt'; \
    	done > $@

    -include ranges.mk

    all: $(TARGETS)

    $(TARGETS) : a.txt # seems more likely
    	process --out $@ --in $< --start $(RANGE_START) --end $(RANGE_END)

(or whatever command; you don't give any example).
I am attempting to do a data pipeline with a Makefile. I have a big file that I want to split into smaller pieces to process in parallel. The number of subsets and the size of each subset is not known beforehand. For example, this is my file:

    $ for i in {1..100}; do echo $i >> a.txt; done

The first step in the Makefile should compute the ranges... let's make them fixed for now:

    ranges.txt: a.txt
    	for i in 0 25 50 75; do echo $$(($$i+1))'\t'$$(($$i+25)) >> $@; done

The next step should read from ranges.txt and create a target file for each range in ranges.txt: a_1.txt, a_2.txt, a_3.txt, a_4.txt, where a_1.txt contains lines 1 through 25, a_2.txt lines 26-50, and so on... Can this be done?
Makefile with variable number of targets
The problem is the use of a pipe here, which is forking a sub-shell for your while loop, and thus changes to DIRS are being made in the child shell that are not visible in the parent shell. Besides, cat is unnecessary here. Have it this way:

    #!/bin/bash

    while read -r line
    do
        DIRS+="$line "
        echo "$DIRS"
    done < /root/backuplist.txt

    echo "$DIRS"
This question already has answers here: Variables getting reset after the while read loop that reads from a pipeline (4 answers). Closed 9 years ago.

I have 3 lines in the file /root/backuplist.txt. The first echo prints perfectly, but the last one prints an empty line; I'm not sure why. Somehow, the $DIRS value is getting unset.

    #!/bin/bash

    cat /root/backuplist.txt | while read line
    do
        DIRS+="$line "
        echo $DIRS
    done

    echo $DIRS
shell variable not keeping value [duplicate]
If you can't find a Powershell cmdlet that will do exactly what you want, you can always roll your own:

    function paste ($separator = ',') {$($input) -join $separator}

    & { echo foo; echo bar; echo baz; } | paste
    foo,bar,baz
Bash has the paste command, which can combine lines from standard input:

    $ { echo foo; echo bar; echo baz; } | paste -s -d ,
    foo,bar,baz

Given this PowerShell command

    & { echo foo; echo bar; echo baz; }

I would like to pipe it to something that will create the same output. I tried with Write-Host but the Separator option was ignored:

    PS > & { echo foo; echo bar; echo baz; } | write-host -NoNewline -Separator ','
    foobarbaz
Join lines from pipe
A pipeline feeds data to standard input. You do not get standard input as an argument. It is simply the standard input. To get what you want from that script you could use:

    echo hi ${1:-$(cat)}

That will use the first argument if there is one and fall back to using cat to read standard input otherwise (cat reads from standard input if no file arguments are supplied and produces that as output, on standard output). The ${1:-...} syntax is Shell Parameter Expansion for "use $1 if it has a non-empty value, otherwise use ...". Note: this will "hang" (in cat) if no arguments are supplied and no data is supplied on standard input.
The code for /usr/local/bin/sayHi:

    echo hi $1

Now in Terminal, if I run sayHi John, it will output hi John. If I want to run echo John | sayHi and have the same output hi John, how can I do that?
Use last command's output as pipeline input for a bash shell script
I haven't used the Java version of GStreamer, but something you need to be aware of when linking is that sometimes the source pad of an element is not immediately available. If you do gst-inspect rtspsrc and look at the pads, you'll see this:

    Pad Templates:
      SRC template: 'stream_%u'
        Availability: Sometimes
        Capabilities:
          application/x-rtp
          application/x-rdt

That "Availability: Sometimes" means your initial link will fail. The source pad you want will only appear after some number of RTP packets have arrived. For this case you need to either link the elements manually by waiting for the pad-added event, or, what I like to do in C, use the gst_parse_bin_from_description function. There is probably something similar in Java. It automatically adds listeners for the pad-added events and links up the pipeline. gst-launch uses these same parse_bin functions, I believe. That's why it always links things up fine.
I'm currently working on a project to forward (and later transcode) an RTP stream from an IP webcam to a SIP user in a video call. I came up with the following gstreamer pipeline:

    gst-launch -v rtspsrc location="rtsp://user:pw@ip:554/axis-media/media.amp?videocodec=h264" ! rtph264depay ! rtph264pay ! udpsink sync=false host=xxx.xxx.xx.xx port=xxxx

It works very fine. Now I want to create this pipeline using Java. This is my code for creating the pipe:

    Pipeline pipe = new Pipeline("IPCamStream");

    // Source
    Element source = ElementFactory.make("rtspsrc", "source");
    source.set("location", ipcam);

    // Elements
    Element rtpdepay = ElementFactory.make("rtph264depay", "rtpdepay");
    Element rtppay = ElementFactory.make("rtph264pay", "rtppay");

    // Sink
    Element udpsink = ElementFactory.make("udpsink", "udpsink");
    udpsink.set("sync", "false");
    udpsink.set("host", sinkurl);
    udpsink.set("port", sinkport);

    // Connect
    pipe.addMany(source, rtpdepay, rtppay, udpsink);
    Element.linkMany(source, rtpdepay, rtppay, udpsink);

    return pipe;

When I start/set up the pipeline, I'm able to see the input of the camera using Wireshark, but unfortunately there is no sending to the UDP sink. I have checked the code for mistakes a couple of times, and I even set up a pipeline for streaming from a file (filesrc) to the same udpsink, which also works fine. But why is the "forwarding" of the IP cam to the UDP sink not working with this Java pipeline?
GStreamer-Java: RTSP-Source to UDP-Sink
You are correct. When in doubt, always make timeline diagrams showing the various pipeline stages. In this case, pictorially, here's what happens: time moves from left to right, and the arrow crossing the table rows in the forwarding version shows where forwarding occurs. Thus, for case (a), 2 cycles are wasted; for case (b), no cycles are wasted and the pipeline is not stalled.
In the following sequence of MIPS instructions (entire program not shown):

    DADDUI R1, R1, #-8
    BNE R1, R2, Loop

I want to confirm the number of stalls required between the two instructions (in the context of the 5-stage MIPS pipeline - IF, ID/Reg, EX, MEM, WB) with and without forwarding. My understanding:

(a) If there is no forwarding: 2 stalls are required (in cycle 5, R1 can be read in the ID stage for the second instruction using split-phase access for registers).

(b) If there is forwarding: no stalls are required (the EX stage of the second instruction in cycle 4 can get R1 - 8 forwarded from the ALU result of the EX stage of the first instruction in cycle 3; this is assuming the branch checks for equality in the EX stage).

Can someone please let me know if the above two answers are correct? Thanks.
MIPS pipeline stalls with and without forwarding
From the tcpdump manual page:

    -l     Make stdout line buffered. Useful if you want to see the data while
           capturing it. E.g., ''tcpdump -l | tee dat'' or
           ''tcpdump -l > dat & tail -f dat''.

Bottom line: the output of tcpdump is buffered - you need the -l option to have it output each packet/line immediately.
I'm very surprised by the behavior of tcpdump. I wrote simple code to do echoes like:

    while (n) {
        n = fread(buf, 16, 1, stdin);
        printf("%s", buf);
        fflush(stdout);
    }

and then I do something like

    $ tcpdump | ./EchoTest

I get a lot of tcpdump packets, but the echo output is suppressed until some amount of them has accumulated. Why does that happen? Things like

    $ cat file | ./EchoTest

or

    $ tail -f file | ./EchoTest

(with $ echo "blabla" >> file) work perfectly and I get output instantly. Does somebody know how to force tcpdump to produce its output in a pipeline as it appears?
Tcpdump with pipeline goes slow
you could use "break" statement, it will behave as your "end-pipeline" cmdlet
Is there any way of ending a pipe operation while you are in a filter? A good use would be to get the first n elements of the list, similar to SQL's TOP statement. If we wrote a filter, call it "Top", we could retrieve the first 3 elements like so:

    dir | Top 3

Once we have returned the first three, we don't need to process any more elements, and in effect we want to be able to tell the pipe "Please don't give me any more elements". I know that Select-Object has the -First argument but that is not the question I am asking. I have answered the question "Powershell equivalent to F# Seq.forall" but instead I would rather implement it as a filter which ends pipeline input. If there was an operation called "end-pipeline", I would implement ForAll as:

    filter ForAll([scriptblock] $predicate) {
        process {
            if ( @($_ | Where-Object $predicate).Length -eq 0 ) {
                $false;
                end-pipeline;
            }
        }
        end {
            $true;
        }
    }
Ending a Pipe Operation
# Edit to include updated code

    $x = import-csv c:\temp\testinput.csv

    function my-function{
        [CmdletBinding()]
        param (
            [Parameter(Mandatory=$false,ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true)]
            [string]$Id,
            [Parameter(Mandatory=$false,ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true)]
            [alias("ID(RAW)")][string]$IdRaw
        )
        begin{
            #Sets up a db connection
            Write-Debug "Starting"
        }
        process {
            #Builds an insert query with csv members
            write-debug "IDRaw=$IDRaw"
        }
        end {
            #closes db connection
            Write-Debug "Ending"
        }
    }

    $x | my-function

Sample file contents:

    ID,ID(RAW),Date Time,Date Time(RAW),Type,Type(RAW)
    29874,29877,4/18/2012 23:58,41018.20753, Servername1, ServernameRaw1
    29875,29878,4/19/2012 23:58,41018.20753, Servername2, ServernameRaw2
    29876,29879,4/20/2012 23:58,41018.20753, Servername3, ServernameRaw3
I am writing a PowerShell advanced function that will take input from the pipeline. More specifically, I will be piping in from Import-Csv. The problem is that the column headers of the CSV file I am using use syntax that is invalid in PS. Here is what my code is like:

    function my-function{
        [CmdletBinding()]
        params (
            [Parameter(Mandatory=$false,ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true)]
            [string]$Id = $_.ID,
            [(Parameter(Mandatory=$false,ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true)]
            [string]$IdRaw = $_."ID(RAW)",
        )
        BEGIN{
            #Sets up a db connection
        }
        PROCESS{
            #Builds an insert query with csv members
        }
        END{
            #closes db connection
        }
    }

    ID,ID(RAW),Date Time,Date Time(RAW),Type,Type(RAW)
    29874,29874,4/18/2012 23:58,41018.20753, Servername, ServernameRaw

When I execute this with my CSV input, the value of $Id becomes 2905, while the $IdRaw variable takes on a string representation of the entire $_ hashtable. Just to elaborate, any parameter with a valid name {URL, ID, Status} works. Any that contain a space or (RAW) receive the entire $_ variable. -Patrick
Access Pipeline Variable Attributes That Have invalid variable names
If you change password-for-object so that it doesn't output a newline, you can call it with a script like:

    #!/bin/bash
    password-for-object "$1"
    if [ -t 1 ]
    then
        echo
    fi

The -t condition is described in the bash manual as:

    -t fd
        True if file descriptor fd is open and refers to a terminal.

See the following question: How to detect if my shell script is running through a pipe?
I have a script named password-for-object which I normally run like this:

    $ password-for-object example.com
    sOtzC0UY1K3EDYp8a6ltfA

I.e. it does an intricate hash calculation and outputs a password that I should use when accessing an object (for example, a website) named example.com. I'll just double-click the whole password, it gets copied into my buffer and I'll paste it into the form. I've also learnt a trick on how to use such a script without making my password visible:

    $ password-for-object example.com | xclip

This way the output of the script ends up in X's primary buffer and I can insert it right into the password field in the form, and it's not shown on the screen. The only problem with this way is that password-for-object outputs a string with a trailing newline and thus "xclip" always catches an extra symbol - this newline. If I omit output of the newline in password-for-object, then I'll end up with a messed-up string without xclip, i.e. when I'm just putting it on stdout. I use 2 shells, zsh and bash, and I'll get the following in zsh (note the extra % sign):

    $ password-for-object example.com
    sOtzC0UY1K3EDYp8a6ltfA%
    $

Or the following in bash (note that the prompt would be started on the same line):

    $ password-for-object example.com
    sOtzC0UY1K3EDYp8a6ltfA$

Any ideas on how to work around this issue? Is it possible to modify the script in a way so it will detect that xclip is in the pipeline and only output a newline if it isn't?
Shell script pipeline: different behavior with or without xclip
I believe you can achieve this with a Command Template. Chapter 4 of the Data Definition Cookbook (PDF link) describes how you write these commands.
I would like to add functionality to the Sitecore Content Editor. I want to perform some action when a developer adds an item through the content tree. I understand I can create an event handler (e.g. OnItemCreating), which all works. The problem is I need user input at this point. By "at this point" I mean OnItemCreating, so the input needs to be there before the item is created. Are events capable of retrieving user input? If so: how? If not: any suggestions on a solution for the above?
Sitecore: Retrieve user input during event or..?
I would suggest that you enforce that all your components implement one or more interfaces, allowing other components and the framework to use those interfaces to interrogate the component on what it can send and what it can receive. This will make your code more robust, and require less magic to work.
How would you suggest going about implementing the following scenario. I have a few thoughts, but none that satisfies my problem in totality, so I would like to get your input.I am building a type of work-flow application. The user creates a pipeline of activities that then needs to be executed. The problem I face is this. Each "widget" in the pipeline must define what it can accept as input, and what it will produce as output. It can receive any number of input "streams" and also produce multiple "streams" of output. Now the problem occurs. These need to by dynamic. For instance, someone should be able to write a plugin for the application where he defines his own widget, with inputs and outputs. But other widgets need to be able to connect to it, so that they can send their output to the new one, or receive input from it.How should one go about firstly exposing the list of acceptable inputs and outputs, and secondly, how can I calculate which method to call on the widget. For example if I want to send output from my widget to the new one, I need to be able to calculateifthere is an acceptable receiving method (in which case there could be more than one), and secondly, I need to know the method to call to give the data to.I have had a look at closure, delegates etc, which seem to be able to do what I need. Just thought I would get some more input first.Thanks.
How to expose a dynamic list of method calls in Java?
    use 5.010_000;
    use utf8;
    use strict;
    use autodie;
    use warnings qw< FATAL all >;
    use open qw< :std :utf8 >;

    END { close(STDOUT) || die "can't close stdout: $!"; }

    if (@ARGV == 0 && -t STDIN) {
        # NB: This is magic open, so the envariable
        # could hold a pipe, like 'cat -n /some/file |'
        @ARGV = $ENV{ART_FILE_LIST} || die q(need $ART_FILE_LIST envariable set);
    }

    while (<>) {
        # blah blah blah
    }
I can use <> to loop over the piped input to a Perl program. However, how can I decide whether there is piped input at all? If there is no piped input I will use an environment variable to load a file. I am trying to use:

    my @lines = (<>);
    if ($#lines == -1) {
        use setenv;
        open FILE, "$ENV{'ART_FILE_LIST'}" or die $!;
        @lines = <FILE>;
    }

Obviously it doesn't work, because the program will wait at the first line.
How to know whether there is piped input to a Perl program
What mode are you using? Integrated Pipeline or Classic? I think this will affect the answer.But essentially, you just need to make sure your StaticFiles handler is not mapped to ASP.NET.
How can I prevent certain file types from going through the ASP.NET Pipeline (hitting global.asax, etc.)?
How to keep images (and other files) out of ASP.NET Pipeline
Are you looking for something like this?

    db.collection.aggregate([
      { "$match": { "unique.0": "col", "unique.1": 34875 } },
      { "$group": {
          "_id": { "$arrayElemAt": [ "$unique", 2 ] },
          "count": { "$sum": 1 }
      } }
    ])

Changes:

- The original aggregation is missing a closing bracket for the $match.
- The original aggregation references a uilist in the $group, but that field doesn't exist in your sample documents.
- Relatedly, the way you access elements in an array while inside an aggregation is by using the $arrayElemAt operator.

See how it works in this playground example.
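A hedged PyMongo version of the same pipeline (a minimal sketch; the connection string, database and collection names are my own assumptions):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed connection string
coll = client["mydb"]["mycollection"]                # assumed db/collection names

pipeline = [
    {"$match": {"unique.0": "col", "unique.1": 34875}},
    # $arrayElemAt pulls the third element of the 'unique' array to group on
    {"$group": {"_id": {"$arrayElemAt": ["$unique", 2]}, "count": {"$sum": 1}}},
]

for doc in coll.aggregate(pipeline):
    print(doc["_id"], doc["count"])
```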
I have a (here simplified) collection of documents:

    { _id: oid, unique: ['col', 34875, 34, '_some_id'], otherfields: 'some text' }
    { _id: oid, unique: ['col', 34875, 17, '_some_other_id'], otherfields: 'other book' }
    { _id: oid, unique: ['afs', 24, 1, '_some_id'], otherfields: 'other book' }

So: the unique array is a somewhat complex identifier, containing four items, where the combination makes it unique. Now I want to group and count the items in the collection by the third element unique.2 in the unique array, first filtered by unique.0 and unique.1:

    pipeline = [
        {'$match': {'unique.0': 'col', 'unique.1': 34875},
        {'$group': {'_id': '$uilist.2', 'count': {'$sum': 1}}},
    ]

Using Python/PyMongo. What am I doing wrong?
Count $sum by item in array
If you want to replace all the names, you can use setNames:

    a <- data.frame(x = c(1:3), y = c(4:6))

    a %>% setNames(c('w', 'q'))
    #>   w q
    #> 1 1 4
    #> 2 2 5
    #> 3 3 6

This also works with the base R pipe operator |>. If you want to replace just one or two you can use rename from dplyr (which I assume you already have loaded given that you are using %>%):

    a %>% rename(z = 2)
    #>   x z
    #> 1 1 4
    #> 2 2 5
    #> 3 3 6

Created on 2023-09-19 with reprex v2.0.2
This question already has answers here: Use pipe operator %>% with replacement functions like colnames()<- (4 answers). Closed 6 months ago.

A sample dataset is as below:

    a = data.frame(c(1:3), c(4:6))
    # a %>% `names <-` ('w', 'q')  # The code did not work...

If I want to rename the columns as 'w' and 'q', what should I do to get the result by using the pipeline operator instead of names(a) = c('w', 'q')?
Is there any way to name the columns of a dataframe by using the pipeline operator? [duplicate]
You need to encapsulate the as.vector(.) == "character" bit, otherwise I think what is being piped forwards is just the string "character":

    DT[,lapply(.SD, class)] %>%
      {as.vector(.) == "character"} %>%
      which(.)
    #x
    #1

On this occasion, you can see where it went wrong if you switch out the %>% for the base |>, . for _ and then quote() it (though the base pipe isn't identical to %>%, it was useful to debug the issue this time).

    quote(
      DT[,lapply(.SD, class)] |>
        as.vector(x=_) == "character" |>
        which(x=_)
    )
    ## as.vector(x = DT[, lapply(.SD, class)]) == which(x = "character")

You could also avoid this by just working inside the data.table j argument of DT[i, j, by] like:

    DT[, which(lapply(.SD, class) == "character")]
    #x
    #1
I want to filter the columns of a data.table based on their class. The approach is actually based on Nera's answer here: Convert column classes in data.table.

    # example
    DT <- data.table(x = c("a", "b", "c"), y = c(1L, 2L, 3L), z = c(1.1, 2.1, 3.1))

    # filter by class with nested structure
    changeCols <- colnames(DT)[which(as.vector(DT[,lapply(.SD, class)]) == "character")]
    changeCols

    # filter by class with pipeline
    DT[,lapply(.SD, class)] %>% as.vector(.) == "character" %>% which(.)
    # Error in which(.) : argument to 'which' is not logical

    # simply breaking the pipeline corrects the error
    cols <- DT[,lapply(.SD, class)] %>% as.vector(.) == "character"
    which(cols)

I don't understand why there's an error when I use the pipeline, while the error goes away when breaking the pipe. Thanks!
cannot pass argument to which with pipeline in R
Emojis are supported. Simply embed the characters directly in your .gitlab-ci.yml file. For example:

    stages:
      - πŸ“¦ build
      - 🀞 test

    "build πŸ— job":
      stage: πŸ“¦ build
      script:
        - echo 'build'

    "test πŸ§ͺ job":
      stage: 🀞 test
      script:
        - echo 'test'

When viewed in GitLab, this will show the emojis in the same places stage/job names are usually displayed.
Good morning, does anyone know how to add an emoji in a GitLab pipeline? In my gitlab-ci.yml file, I tried:

- adding the emoji hard-coded -> it does not work
- using the unicode of the emoji -> it doesn't work
- using an icon key -> not working
- using an emojis key -> not working

Obviously it is possible to do this:
https://gitlab.com/gitlab-org/gitlab-foss/-/issues/31581#note_393070368
https://github.com/yodamad/gitlab-emoji/blob/master/README.md

Thanks in advance :)
Using emojis in a gitlab-ci
Just tick that highlighted privileged checkbox, and save the latest configuration. You can see it is already mentioned there that you need to enable this flag in order to build Docker images inside the CodeBuild agent.
I have a Java application. Locally, I can connect to a Dev Container using Visual Studio Code. Now I want to build a CodePipeline in AWS, but it displays an error like this when I try to start docker in CodeBuild's Ubuntu standard 7.0 container:

    [Container] 2023/05/20 12:13:06 Running command sudo service docker start
    /etc/init.d/docker: 96: ulimit: error setting limit (Operation not permitted)

Please help. Here's my buildspec.yml:

    phases:
      install:
        commands:
          - cat /etc/os-release
          - sudo apt update -y
          - sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release
          - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
          - echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
          - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
          - sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose
          - sudo service docker start
      build:
        commands:
          - docker-compose up -d

My build image:
Running docker inside a container in AWS CodeBuild
Your system is secure against certain types of vulnerabilities. If you let your application run without touching the codebase, it becomes less secure each and every day. So, today's security score might not be sufficient for tomorrow. Regularly checking your codebase against common vulnerabilities (by utilizing OWASP, CWE, etc.) helps you to identify potential risks, and doing a proper risk assessment can help you to spot high-risk issues so they can be either prevented / mitigated or at least detected. So, my point is that having an up-to-date list of high-risk vulnerabilities (from your system's perspective) can help you to take action and keep your system secure against the newest Common Vulnerabilities and Exposures (CVEs) as well.
I want to start measuring software security, meaning that I want to understand whether my application is secure or not and improve from month to month. It would also be really useful to have some suggestions for tools. I use Sonar for detecting vulnerabilities but it is not enough, because I am not able to see the progress and effort spent on it. For example, I can see I have 10 major vulnerabilities, but I am not sure what I can measure other than the number of vulnerabilities.
How to measure software security?
I came across your post looking for a similar situation. Here is what I did to make it work; hope it helps for your case:

    ...
    terraform format:
      stage: validate
      script:
        - gitlab-terraform fmt
      parallel:
        matrix:
          - TARGET: test
          - TARGET: qa
      environment:
        name: $TARGET
        action: verify
      rules:
        - changes:
            - terraform/$TARGET/**/*
    ...

With that code, I manage to create specific jobs for a given env (TARGET) only if the terraform/$TARGET/**/* folder contains modifications. Hope it helps!
Say, for example, my pipeline contains the following job:

    sast-container:
      <<: *branches
      allow_failure: true
      parallel:
        matrix:
          - CI_REGISTRY_IMAGE: $CI_REGISTRY_IMAGE/address-check-stub
          - CI_REGISTRY_IMAGE: $CI_REGISTRY_IMAGE/manual-service-stub
          - CI_REGISTRY_IMAGE: $CI_REGISTRY_IMAGE/employment-record-stub

and I want to run each of the jobs in the matrix if and only if there has been a change to the code that affects them. I was thinking along the lines of something like this:

    sast-container:
      <<: *branches
      allow_failure: true
      parallel:
        matrix:
          only:
            changes:
              - stub-services/address-check-stub
          - CI_REGISTRY_IMAGE: $CI_REGISTRY_IMAGE/address-check-stub
          only:
            changes:
              - stub-services/address-check-stub
          - CI_REGISTRY_IMAGE: $CI_REGISTRY_IMAGE/manual-service-stub
          only:
            changes:
              - stub-services/address-check-stub
          - CI_REGISTRY_IMAGE: $CI_REGISTRY_IMAGE/employment-record-stub

which, of course, to nobody's surprise (including my own), doesn't work.
In the Gitlab CI pipeline, how can I conditionally run jobs in parallel?
You can use sys.exit(is_word_found). But remember to import the sys module (import sys). Like this:

    import sys

    word = "test"

    def check():
        with open("tex.txt", "r") as file:
            for line_number, line in enumerate(file, start=1):
                if word in line:
                    return 0
        return 1

    is_word_found = check()
    sys.exit(is_word_found)

However, you have a lot of other options too. Check out this: https://www.geeksforgeeks.org/python-exit-commands-quit-exit-sys-exit-and-os-_exit/
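A small, hypothetical sketch of why this makes the CI job fail: the process exit code set by sys.exit() is what the shell (and therefore the GitLab runner) checks, and any non-zero code marks the job as failed.

```python
import subprocess
import sys

# Run a child Python process that exits with code 1, the same way
# `python verify.py` would when the word is not found.
child = subprocess.run([sys.executable, "-c", "import sys; sys.exit(1)"])

print("child exit code:", child.returncode)  # -> 1

# A CI runner treats any non-zero exit code as a failed job,
# so propagating it from the parent script keeps that behaviour.
sys.exit(child.returncode)
```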
I have a python file that opens a file and checks for a word. The program returns 0 if it passes and 1 if it fails.

    import sys

    word = "test"

    def check():
        with open("tex.txt", "r") as file:
            for line_number, line in enumerate(file, start=1):
                if word in line:
                    return 0
        return 1

    is_word_found = check()  # store the return value of check() in variable `is_word_found`
    print(is_word_found)

Output: 1

I have a gitlab-ci.yml that runs this python script in a pipeline:

    image: davidlor/python-git-app:latest

    stages:
      - Test

    Test_stage:
      tags:
        - docker
      stage: Test
      script:
        - echo "test stage started"
        - python verify.py

When this pipeline runs, the python code prints 1, which means the program failed to find the test word. But the pipeline passes successfully. I want to fail the pipeline if the python script prints 1. Can somebody help me here?
How to fail a gitlab CI pipeline if the python script throws error code 1?
You are looking for a resource_group. See https://docs.gitlab.com/ee/ci/resource_groups/ for details.
I have one branch in GitLab and 2 people committed at the same time to the same branch, so I want to prevent the GitLab CI/CD pipelines from running in parallel: the pipeline for the first commit should run first, and only after the first pipeline finishes should the second pipeline start. Could you please help me with how to do this?
How to prevent parallel CI/CD pipeline runs
The NER component requires a tok2vec (or Transformers) component as a source of features, and will not work without it. For more details about pipeline structure and feature sources, this section of the docs may be helpful.
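A minimal spaCy sketch of that ordering (my own illustration, not the asker's project; whether ner actually listens to this tok2vec is decided by the training config):

```python
import spacy

# Build a blank English pipeline and add a feature source before the NER component.
nlp = spacy.blank("en")
nlp.add_pipe("tok2vec")   # token-to-vector feature source
nlp.add_pipe("ner")       # entity recognizer, placed after its feature source

print(nlp.pipe_names)     # ['tok2vec', 'ner']
```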
I'm currently training a model for named entity recognition and I could not find out how the pipeline in spaCy should be structured in order to achieve better results. Does it make sense to use tok2vec before the ner component?
Should I use tok2vec before ner in spacy?
No, state is local to each stateful DoFn, and it is also actually local to each key (and window, if you are using a window) inside that DoFn.
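A hedged sketch in Beam's Python SDK (rather than whatever SDK the asker used) showing that each stateful DoFn declares its own state cell; even with the same spec name, what one DoFn writes is not visible from the other:

```python
import apache_beam as beam
from apache_beam.coders import VarIntCoder
from apache_beam.transforms.userstate import ReadModifyWriteStateSpec

class WriterDoFn(beam.DoFn):
    # State is declared per DoFn; this cell exists only for WriterDoFn.
    MY_STATE = ReadModifyWriteStateSpec('my_state', VarIntCoder())

    def process(self, element, state=beam.DoFn.StateParam(MY_STATE)):
        state.write(1)
        yield element

class ReaderDoFn(beam.DoFn):
    # A *different* state cell, scoped to ReaderDoFn (and to the element's key/window).
    MY_STATE = ReadModifyWriteStateSpec('my_state', VarIntCoder())

    def process(self, element, state=beam.DoFn.StateParam(MY_STATE)):
        yield (element[0], state.read())   # reads None: nothing was written in this DoFn

with beam.Pipeline() as p:
    (p
     | beam.Create([('key', 1), ('key', 2)])   # stateful DoFns require keyed input
     | beam.ParDo(WriterDoFn())
     | beam.ParDo(ReaderDoFn())
     | beam.Map(print))
```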
Is Beam State shared across different DoFns? Let's say I have 2 DoFns:

    StatefulDoFn1: { myState.write(1) }
    StatefulDoFn2: { myState.read(); do something ... output }

And then the pipeline in pseudocode:

    pipeline = readInput.........applyDoFn(StatefulDoFn1)......map{do something else}.......applyDoFn(StatefulDoFn2)

If I annotate myState identically in both StatefulDoFns, will what I write in StatefulDoFn1 be visible to StatefulDoFn2? We implemented a pipeline with the assumption the answer is yes ---- but it seems to be no.
Sharing Beam State across different DoFns
You are getting this error because the emulator executable is not on the PATH. Try setting your PATH variable to add the emulator executable:

    environment {
        PATH = "/PATH_EMULATOR/bin:${env.PATH}"
    }

or something like the below:

    withEnv(["PATH+EMULATOR=/PATH_EMULATOR/bin"]) {
        sh('emulator -list-avds')
    }

or you can also use the fully qualified path to the executable:

    sh('/PATH_TO_EMULATOR/bin/emulator -list-avds')
macOS Monterey version 12.4. I'm trying to run a simple pipeline script in my Jenkins job:

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh('emulator -list-avds')
                }
            }
        }

But it throws an error:

    /Users/<my_username>/.jenkins/workspace/<my_job_name>@tmp/durable-22217e91/script.sh: line 1: emulator: command not found

My question is: why is it executing commands in the tmp folder? Anything "emulator" related does work when I run commands via terminal. Following this answer, I've confirmed I'm in the correct dir: Why Jenkins mounts a temporary volume in addition to the workspace?
Jenkins pipeline job executing commands in tmp folder
The condition needs to be

    [
      {
        "drop": {
          "if": "ctx.service.environment == 'ct1'",
          "description": "Drop documents that contain 'service.environment' == ct1"
        }
      }
    ]
I am using Elastic APM and want to drop some data based on service.environment, but it does not seem to be working. Below is the code under the ingest pipeline:

    [
      {
        "drop": {
          "if": "'service.environment' == 'ct1'",
          "description": "Drop documents that contain 'service.environment' == ct1"
        }
      }
    ]
Drop Processor on APM-Elastic
You are trying to pass a String to a List. You could try again by making the Pipelet input parameter "SKUs" of type "java.util.List<java.lang.String>".
This is my code (screenshots: Pipeline, Pipelets Java, Pipelets XML). I'm passing a parameter (SKUs) value into a pipeline from JS. The value is getting passed into the pipeline but not into the pipelet; it tells me the parameter SKUs is not available. Thanks in advance.
Pipelet Input Parameter missing
There isn't a built-in I/O for Apache POI: https://beam.apache.org/documentation/io/built-in/. You can follow this instruction to create a custom source if needed: https://beam.apache.org/documentation/io/developing-io-java/. You can also write some ParDo with stateful processing to control your throughput. A state is per-key-per-window, so as long as you limit your pipeline to a single key (or the desired number of "processes" as you mentioned) and use the default global window for batch, you can limit the parallelism.
We have multiple spreadsheets. Each has 2 to 4 sheets that describe entities; rows in sheet 1 map to rows in sheet 2 and so on. I can use Apache POI to read these and map them to my POJOs. I tried OpenCSV, but that has CSV support and not POI as a reader. Also, when processing this data, we need to call an API with attachments. We were planning to use the Apache Beam direct runner and were able to do a POC; the only thing is that we are not sure how to throttle the processes so that only 1 record is processed every minute. We have 4 attributes to look at: job product, tenant, sub type and priority. We would like limits at the product, tenant and sub type level. Meaning: even if a tenant can have 10 processes, if the product has a limit of 15 and there are already 15 running for other tenants, then make this one wait. We can use a SINGLE_BEAM for 1 upload, but how do we limit across all jobs? We have classes to represent jobs, tasks (rows inside a file or some other sub-unit of work) and tenants, to track status and counts. We can write our own Java code to enforce this, but we are not sure where to plug this in to Apache Beam.
Bulk ops in Apache beam Java with native throttling
As of this writing (redis-py 4.3.1) there exists another pipeline object on the timeseries class itself. The following will work:

    import redis

    r = redis.Redis()
    pipe = r.ts().pipeline()
    pipe.add("TS1", 1, 123123123123)
    pipe.add("TS1", 2, 123123123451)
    ...
    pipe.add("TS1", 15, 123123126957)
    pipe.execute()
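For completeness, a small follow-up sketch (my own illustration, assuming RedisTimeSeries is loaded and redis-py >= 4.x) that batches the adds and then reads the range back to verify the samples landed:

```python
import redis

r = redis.Redis()

# Queue several TS.ADD calls in one round trip, then read the range back.
pipe = r.ts().pipeline()
for ts, value in [(1, 10.0), (2, 11.5), (3, 9.8)]:
    pipe.add("TS1", ts, value)
pipe.execute()

# range(key, from_time, to_time) returns [(timestamp, value), ...]
print(r.ts().range("TS1", 1, 3))
```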
I am looking to use a pipeline to insert data into a Redis time series but cannot find a way to do TS.ADD via pipeline. I can do a basic example with get / set:

    import redis
    import json

    redis_client = redis.Redis(host='xxx.xxx.xxx.xxx', port='xxxxx', password='xxxx')
    pipe = redis_client.pipeline()
    pipe.set(1,'apple')
    pipe.set(2,'orange')
    pipe.execute()

I can't find a way to insert into a time series:

    import redis
    import json

    redis_client = redis.Redis(host='xxx.xxx.xxx.xxx', port='xxxxx', password='xxxx')
    pipe = redis_client.pipeline()
    pipe.ts.add(TS1,1652683016,55) #<----- this is what I want to do!
    pipe.ts.add(TS1,1652683017,59) #<----- this is what I want to do!
    pipe.execute()
Redis Timeseries Pipeline with Python
I have used pipelines recently for data exploration purposes: I wanted to random-search different pipelines. This could be at least one reason to use pipelines. But you are right that pipelines aren't very useful for many other purposes.
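A small scikit-learn sketch of that exploration use case (illustrative only): a Pipeline lets one search over preprocessing and model hyperparameters together, with the preprocessing re-fit inside each CV split.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Step parameters are addressed as <step_name>__<param>, so one random search
# covers both the preprocessing and the model.
search = RandomizedSearchCV(
    pipe,
    param_distributions={"pca__n_components": [2, 5, 10], "clf__C": [0.1, 1.0, 10.0]},
    n_iter=5,
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```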
A noob question. As I understand it, the pipeline library of scikit-learn is a kind of automation helper, which brings the data through a defined data processing cycle. But in this case I don't see any sense in it. Why can't I implement data preparation, model training, score estimation, etc. via functional or OOP programming in Python? To me it seems much more agile and simple: you can control all inputs, adjust complex dynamic parameter grids, evaluate complex metrics, etc. Can you tell me why anyone should use sklearn.pipeline? Why does it exist?
Scikit learn pipelines. Does it make any sense?
Errors thrown in a stream.pipeline() may be captured in the try-catch by promisifying the pipeline and awaiting the streamed data to be consumed entirely.

    import stream from "stream";
    import util from "util";

    try {
      // ... truncated non-relevant code
      await util.promisify(stream.pipeline)(
        userStream,
        split2(),
        async function( source ){
          for await (const userId of source){
            console.log(userId)
          }
        }
      )
    } catch (error) {
      // Catch errors that occur during the execution of pipeline()
    }

Note that stream and util are built-in Node.js libraries.
Below is the code snippet where I read a huge file of UUIDs line by line and process it:

    try {
      const downloadedFile = this._s3client.getObject(params)
      const userStream: Readable = await downloadedFile.createReadStream()
      for await (const userId of pipeline(userStream, split2(), piplineCallback)) {
        console.log(userId)
      }
    } catch (error) {
      console.log('Error should be caught here.')
      throw error
    }

    const piplineCallback = (err) => {
      if (err) {
        logger.error({ exception: err }, 'Error occured in splitted userstream pipeline.')
        throw new InternalServerError(err)
      } else {
        logger.debug('Userstream pipeline succeded.')
      }
    }

When there is an error thrown from the pipeline API, it does not go into the catch block but rather reaches my server index.ts, where it gets caught in the 'uncaughtException' event. I am looking for a way to catch all the stream errors in my class's try-catch block, as I need to do some special handling there.
How to catch the errors thrown from stream.pipeline API in try catch block in Nodejs v12.x
You can achieve this using a dynamic pipeline as follows:

1. Create a Config / Metadata table in SQL DB wherein you would place details like source table name, source name, etc.

2. Create a pipeline as follows:

   a) Add a Lookup activity wherein you would create a query based on your Config table: https://learn.microsoft.com/en-us/azure/data-factory/control-flow-lookup-activity

   b) Add a ForEach activity and use the Lookup output as input to the ForEach: https://learn.microsoft.com/en-us/azure/data-factory/control-flow-for-each-activity

   c) Inside the ForEach you can add a Switch activity where each Switch case distinguishes a table or source

   d) In each case, add a Copy or other activities which you need to create the file in the RAW layer

   e) Add another ForEach in your pipeline for the Processed layer, wherein you can add similar inner activities as you did for the RAW layer, and in this activity you can add the processing logic

This way you can create a single pipeline, and a dynamic one at that, which can perform the necessary operations for all sources.
I want to achieve an incremental load/processing flow and store the results in different places using Azure Data Factory after processing them, e.g.:

External data source (data is structured) -> ADLS (Raw) -> ADLS (Processed) -> SQL DB

Hence, I will need to extract a sample of the raw data from the source, based on the current date, store it in an ADLS container, then process the same sample data, store it in another ADLS container, and finally append the processed result to a SQL DB.

ADLS raw:
    2022-03-01.txt
    2022-03-02.txt

ADLS processed:
    2022-03-01-processed.txt
    2022-03-02-processed.txt

SQL DB: all the txt files in the ADLS processed container will be appended and stored inside the SQL DB.

Hence I would like to check: what would be the best way to achieve this in a single pipeline that has to be run in batches?
Multi Step Incremental load and processing using Azure Data Factory
"Is there a way to add comments or just text to an approval in azure pipelines?" Yes, you can add comments or text, which is optional, when approving the Azure pipeline. "And if so, is it possible to show variables that are used in the run? I need to display some variables in the window below." Checks can be configured on environments, service connections, repositories, variable groups, secure files, and agent pools. As mentioned in the documentation, by default only predefined variables are available to checks. You can use a linked variable group to access other variables.
Is there a way to add comments or just text to an approval in azure pipelines? And if so, is it possible to show variables that are used in the run? I need to display some variables in the window below. Ignore the red part.
Comments on approvals in azure pipeline
I resolved the above build issue by updating the pipeline.
In my pipeline build .yml I have

    buildConfiguration: "Release"

and use it in the build command:

    task: DotNetCoreCLI@2
      displayName: Build
      inputs:
        command: build
        projects: "**/*.csproj"
        arguments: '--configuration $(buildConfiguration)

I got

    6.0.101\Roslyn\Microsoft.CSharp.Core.targets(75,5): Warning MSB3052: The parameter to the compiler is invalid, '/define:$(BUILDCONFIGURATION)' will be ignored.

in the .NET 6.0 build. Any suggestions for fixing this warning?
Resolve the build warning "The parameter to the compiler is invalid, '/define:$(BUILDCONFIGURATION)' will be ignored" in .NET 6.0
You can set the pipeline up in full with deployment to each environment, but set the release to require manual intervention using when, e.g.

    deploy to dev:
      stage: dev_test
      script:
        - deploy...
      when: manual

    deploy to staging:
      stage: staging_release
      script:
        - deploy...
      needs:
        - deploy to dev
      when: manual

The deployment will need to be manually triggered from the pipeline screen before it will take place.
In Heroku I can set up a dev > staging > prod pipeline, and, for example, when the tester says a change is ready to go to staging, he moves the commit from dev to staging by pressing "Promote". Is that available in GitLab? So, without touching git, just move a branch forward.
How to promote in GitLab from dev to staging?
Your URL is correct. Just check the following and then it should work:

- Add the MSI of the workspace to the workspace resource itself with Role = Contributor.
- In the web activity, set the Resource to "https://dev.azuresynapse.net/" (without the quotes, obviously). This was a bit buried in the docs; see the last bullet of this section: https://learn.microsoft.com/en-us/rest/api/synapse/#common-parameters-and-headers

NOTE: the REST API is unable to cancel pipelines run in DEBUG in Synapse (you'll get an error response saying a pipeline with that ID is not found). This means that for it to work, you have to first publish the pipelines and then trigger them.
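Outside of the web activity, the same call can be made from code. A hedged Python sketch (the workspace name and the use of azure-identity are my own illustrative assumptions; the run ID and endpoint shape come from the question):

```python
import requests
from azure.identity import DefaultAzureCredential

workspace = "myworkspace"                         # assumed workspace name
run_id = "729345a-fh67-2344-908b-345dkd725668d"   # run ID from the question

# The token must be scoped to the Synapse dev endpoint, matching the Resource above.
token = DefaultAzureCredential().get_token("https://dev.azuresynapse.net/.default").token

resp = requests.post(
    f"https://{workspace}.dev.azuresynapse.net/pipelineruns/{run_id}/cancel",
    params={"api-version": "2020-12-01"},
    headers={"Authorization": f"Bearer {token}"},
)
print(resp.status_code, resp.text)
```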
I have a pipeline I need to cancel if it runs for too long. It could look something like this: so, in case the work takes longer than 10000 seconds, the pipeline will fail and cancel itself. The thing is, I can't get the web activity to work. I've tried something like this: https://learn.microsoft.com/es-es/rest/api/synapse/data-plane/pipeline-run/cancel-pipeline-run, but it doesn't even work using the 'Try it' feature. I get this error:

    {"code": "InvalidTokenAuthenticationAudience", "message": "Token Authentication failed with SecurityTokenInvalidAudienceException - IDX10214: Audience validation failed. Audiences: '[PII is hidden]'. Did not match: validationParameters.ValidAudience: '[PII is hidden]' or validationParameters.ValidAudiences: '[PII is hidden]'."}

Using this URL:

    POST https://{workspacename}.dev.azuresynapse.net/pipelineruns/729345a-fh67-2344-908b-345dkd725668d/cancel?api-version=2020-12-01

Also, using ADF it seemed quite easy to do this: https://cloudsafari.ca/2020/09/data-engineering/Azure-DataFactory-Cancel-Pipeline-Run, including authentication using a Managed Identity; in the case of Synapse I'm not too sure which resource should be used. Any idea on how to achieve what I want, or what I'm doing wrong?
Cancel Synapse pipeline from the pipeline itself
You're setting the variable CYPRESS_BASE_URL in a different shell, so when hybris-url-mapping.sh exits, it's not exported to the parent shell. It's a security feature, and it can't be changed. The only way is to echo it and perform command substitution:

    $ CYPRESS_BASE_URL=$(./hybris-url-mapping.sh)

or:

    $ CYPRESS_BASE_URL=`./hybris-url-mapping.sh`

After this line, you will have whatever you echoed in hybris-url-mapping.sh in CYPRESS_BASE_URL.
We are using Cypress and Bitbucket Pipelines. We have several development environments, and I want to change the baseUrl dynamically depending on which branch I select via Bitbucket Pipelines. The script for running the Cypress commands in Bitbucket Pipelines looks like this:

    script:
      - ./hybris-url-mapping.sh $CYPRESS_BASE_URL
      - cd /working
      - npx cypress run --env baseUrl=$CYPRESS_BASE_URL

The script (./hybris-url-mapping.sh) which should change the URL based on the selected branch looks like this:

    if [[ $BITBUCKET_BRANCH == 'develop' ]]; then CYPRESS_BASE_URL="develop_url" ; fi
    if [[ $BITBUCKET_BRANCH == 'staging' ]]; then CYPRESS_BASE_URL="staging_url" ; fi
    if [[ $BITBUCKET_BRANCH == 'staging2' ]]; then CYPRESS_BASE_URL="staging2_url" ; fi
    if [[ $BITBUCKET_BRANCH == 'release' ]]; then CYPRESS_BASE_URL="prod_url" ; fi
    echo $CYPRESS_BASE_URL

Now we come to the problem: unfortunately it does not work. At least echo $CYPRESS_BASE_URL returns the correct URL based on the branch, but it is not set as the baseUrl in the command line npx cypress run --env baseUrl=$CYPRESS_BASE_URL. There the URL in cypress.json is always used:

    {
      "baseUrl": "local_url"
    }

I have tried npx cypress run --env baseUrl=$CYPRESS_BASE_URL and npx cypress run --config baseUrl=$CYPRESS_BASE_URL but neither works. What am I doing wrong?
Cypress: Change baseUrl with bitbucket pipeline (command line)
You can define your docker container in the agent section at the top level of your Jenkinsfile:

    pipeline {
        agent {
            docker { image 'terraform-image' }
        }
        stages {
            ....
        }
    }

Each stage will run in the same docker container and you will not lose your workspace between stages.
Is there a way to define the container where your entire pipeline stages and steps can run, from start to finish, without having to fire up a container each time you want a stage to run? The reason for this is that I am running terraform, which requires a set of steps to execute before running the deploy (init, plan, apply). One could, of course, run each stage separately and call docker to run a container each time, but that would be counter-intuitive. Moreover, data would be lost unless you save the terraform plan output as an artifact, for example... which is like scratching your left ear with the right hand, around your head. I'd like to run the container at the top, then build each stage inside, without killing it.
Way to run an entire Jenkins pipeline, start to finish, inside a Docker container?
Try printing all the files on the machine, for example:

    apt-get install -y tree
    tree

Also, the cache is only filled after a successful completion of the pipeline. Or your src/ui/node_modules folder is empty.
I have a project with two sub folders. Here's the structure:

    src
    |- api
    |- ui
       |- node_modules
    bitbucket-pipelines.yml

Since my node_modules folder is not in the same directory as the pipeline file, I tried creating a definition and included the path. Here's where I'm trying to cache it in my pipeline:

    pipelines:
      default:
        - step:
            name: "Frontend"
            image: node:14.17.1
            caches:
              - frontendnode
            script:
              - cd src/ui
              - npm install
              - CI=false npm run build

    definitions:
      caches:
        frontendnode: src/ui/node_modules

My guess is that my definition is wrong, but I have tried multiple things and I'm getting the same error:

    Assembling contents of new cache 'frontendnode'
    Cache "frontendnode": Skipping upload for empty cache

Thanks!
Bitbucket pipeline custom cache definition not working
Assuming you want to copy a file into another GitLab project: once an artifact is built in a job in a stage (i.e. build), it will be available in all jobs of the next stage (i.e. deploy).

    build-artifact:
      stage: build
      script:
        - echo "build artifact"
      artifacts:
        name: "public"
        paths:
          - "public/"

    deploy_artifact:
      stage: deploy
      script:
        - cd public/   # here are your artifact files.
        - ls -l
        - cd ..
        # now implement your copy strategy,
        # for example in a git repo:
        - git clone https://myusername:<access_token>@gitlab.com/mygroup/another_project.git
        - cd another_project
        - cp ../public/source_file ./target_file
        - git add target_file
        - git commit -m "added generated file"
        - git push

If your destination is not a git repository, you can simply replace this with an scp directly to a server, or any other copy strategy, directly in the script. Another way to do it is to get the latest artifact of project A inside the CI of project B. Please see https://docs.gitlab.com/ee/ci/pipelines/job_artifacts.html#retrieve-job-artifacts-for-other-projects
I have a CI/CD pipeline that builds an artifact and saves it in the public folder. I need to copy some files from these artifacts to another project. What script can I use, or what is the best way to do it?
How to copy files from a project's artifacts to another project using CI/CD
"Everything in this design sound doable, but also sounds like I "inventing the wheel" instead of using an existing solution that allow those option"Microsoft's ownDataFlowbasically provides this functionality with very convenient options to parallelize steps etc.Let's break it down:Solve an equationThat sounds like a job for aTransformBlock.You set it up to use a transformation method that will take an (let's call it)TInputtype and producesTOutput(Result of Equation).Send an EmailI'd break this up into two more Blocks:TransformTOutputto an EmailSend EmailSo you have one moreTransformblock<TOutput , Email>and oneActionBlock<Email>(I am using "Email" like a type here. It's just a placeholder. The exact type of course depends on Email Framework in use.)Put it all togetherYou then build your pipeline by "linking"TransformBlock<TInput, TOutput>=>TransformBlock<TOutput, Email>=>ActionBlock<Email>.Having done that, you have set up a complete Pipeline to wich you can submitTInputs and the framework will take care of the rest. Each block can beconveniently configuredto for example process severalTInputin parallel etc.It also let's you decide if you want to use synchronous or asynchronous (Task/await) API.
BackgroundI have set of tools\solution that can be combined together into a one single data processing\action flow.Each unit on my flow do a calculation or do an action.Example:Solve equation->send emailIn this example, theSolve equationunit is a type of calculation unit. While thesend emailunit is action.The point that I have 100 different units that can be combined together on a different order.The QuestionIn order to solve this problem, I planning to create a data flow for my application. Each flow will implement this interface:public interface IFlow { public IUnit[] UnitsChain{get;} public void Start(string input); }While my unit will implement this interface:public interface IUnit { public string /*output*/ Process(string input); }Everything in this design sound doable, but also sounds like I "inventing the wheel" instead of using an existing solution that allow those option.Looking to better solution to implement this kind of custom pipeline processing.Thanks!
Designing pipeline system
@Domscheit's answer works if the hexadecimal string does not contain the '0x' prefix; if it does, then modify the function posted by @Domscheit as follows: function baseToDecimal(input, base) { // works up to 72057594037927928 / FFFFFFFFFFFFF8 var field = input; return { $sum: { $map: { input: { $range: [0, { $strLenBytes: field }] }, in: { $multiply: [ { $pow: [base, { $subtract: [{ $strLenBytes: field }, { $add: ["$$this", 1] }] }] }, { $indexOfBytes: ["0123456789ABCDEF", { $toUpper: { $substrBytes: [field, "$$this", 1] } }] } ] } } } }; } and call it as follows, using the $replaceOne operator to remove the x: db.collection.aggregate([{$set:{lHex: {$replaceOne: {input:"$hex", find:"x", replacement: "0"}}}}, {$set: {decimal: baseToDecimal("$lHex", 16)}}])
I have a collection that has strings such as this:left_eye_val : "0x0", right_eye_val : "0x2"I'm setting up some derivative fields as part of a aggregation pipeline which at its first stage must convert the hexadecimal strings "0x0", "0x2" into numbers.The operator I tried:{$toInt:"$left_eye_val"}returnsIllegal hexadecimal input in $convert with onError value:0x0Is there a way to convert these strings into numbers using built-in mongodb operators? If not, what are some of the ways one might accomplish this?
How do you convert a hexadecimal string into a number in mongodb?
No need to use a pipeline. Instead, you can use the BF.MEXISTS command to check multiple items in a single call: BF.MEXISTS key item1 item2 item3
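As a rough sketch of what that looks like from a client (using redis-py against a RedisBloom-enabled server; the key and item names here are made up):

import redis

r = redis.Redis()
items = ["item1", "item2", "item3"]

# One round trip instead of N separate BF.EXISTS calls.
flags = r.execute_command("BF.MEXISTS", "mybloom", *items)
print(dict(zip(items, flags)))  # e.g. {'item1': 1, 'item2': 0, 'item3': 1}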
Redis has a module for working with bloom filters:https://oss.redislabs.com/redisbloom/Bloom_Commands/Redis also allows for pipelining of commands:https://redis.io/topics/pipeliningSpecifically I am looking to check for the existence of a long list of items in a bloom filter. In the current implementation this requires me to issue N individual requests, one for each item in the checklist.Looking for client code examples to pipeline a batch ofBF.EXISTScalls in one network request.
Is it possible to pipeline Redis bloom filter commands?
In the left-hand side panel go to Project --> Settings --> Integrations --> Pipelines emails. Tick the Active checkbox, add the recipients, then save the changes.
After completion of a pipeline, the committer gets a mail from GitLab about its success or failure, with all the information such as committer, branch, commit, project, etc. But only the committer receives this mail. How can this mail also be sent to other GitLab members, or to a common DL, along with the committer? Where can we set this in GitLab? Is there any setting which needs to be enabled?
How to send mail from gitlab after pipeline build
If we need to replace the "," with blank ("") in all the character columns, use mutate with across, as gsub/sub etc. work on a vector as input and not on a data.frame: library(stringr) library(dplyr) df1 <- df %>% mutate(across(where(is.character), ~ as.numeric(str_remove_all(., ',')))) If we want to exclude the second column: df1 <- df %>% mutate(across(c(where(is.character), -2), ~ as.numeric(str_remove_all(., ',')))) Note that select_if or select(where(...)) will only select those columns from the original data. If the intention is to replace the "," in the original dataset columns, use mutate with across. Data: df <- structure(list(col1 = 1:5, col2 = c("a", "b", "c", "d", "e"), col3 = c("1,2", "1,5", "1,3", "1,44", "1,46"), col4 = c("1,2", "1,5", "1,3", "1,44", "1,46")), class = "data.frame", row.names = c(NA, -5L))
I have a dataset of multiple types. It was created in an Excel spreadsheet so some numbers contain commas (e.g. 1,346 instead of 1346). Hence, making them of type character instead of numeric.Here's what I attempted to make the conversion:df[-2] %>% select_if(is.character) %>% as.numeric(gsub(",", "", df))I am excluding the second column from the selection as it is a valid character type for my analysis.The error I am getting is:Error in df[-2] %>% select_if(is.character) %>% as.numeric(gsub(",", : 'list' object cannot be coerced to type 'double'How could I make this work?
R: Error converting character type to numeric type when using select_if() and gsub()
That the transform function is not called again following the 16th call is a hint. The reason is given here: https://nodejs.org/api/stream.html#stream_new_stream_writable_options. In objectMode the transform has a default highWaterMark of 16. Because the transform function is also "pushing" data to its own readable via the callback, this is causing that readable buffer to become full (as there is no further writable to consume that readable's data). And so the stream is pausing due to backpressure that starts from the transform-readable portion of the stream and "flows" up to the transform-writable side and then on up to the original readable stream, which is finally paused. transform-readable - pauses after 16 chunks pushed; transform-writable - keeps accepting chunks and buffering up to the next 16 chunks from the upstream readable; readable - keeps reading until its buffer is full, the next 16 chunks, then pauses. So the original readable will pause by default after 16 * 3 chunks, or 48 reads. When not in objectMode, the highWaterMark for the buffer/string is 16384 bytes (although setEncoding can change the meaning of this).
Why does pipeline never call its callback? Also the transform function stops being called after 16 chunks.eg:const { Readable, Transform, pipeline } = require('stream'); const readable = new Readable({ objectMode: true, read() { this.push(count++); if (count > 20) { this.push(null); } } }); const transform = new Transform({ objectMode: true, transform(chunk, encoding, callback) { data.push(chunk); console.log('transform - chunk: ', chunk.toString()); callback(null, chunk); } }); let count = 1, data = []; pipeline( readable, transform, (error) => { if (error) console.log('pipeline callback - ERROR: ', error); else console.log('pipeline callback - data: ', data); } );
Nodejs stream behaviour, pipeline callback not called
You can do it in Azure Devops. Try put this YML in Azure DevOps Pipelinetrigger: - main pool: vmImage: 'ubuntu-latest' name: 'yourApplicationName' steps: - task: NodeTool@0 inputs: versionSpec: '12.x' displayName: 'Install Node.js' - script: | npm install -g @angular/cli npm install displayName: 'npm install' workingDirectory: '$(Build.SourcesDirectory)/yourApplicationName' - task: AzureStaticWebApp@0 inputs: app_location: "/yourApplicationName" api_location: "api" app_build_command: $(build_command) output_location: "dist/yourApplicationName" env: azure_static_web_apps_api_token: $(deployment_token)You can put build_command and deployment_token in the variables
Azure static web apps(preview) currently only works with a github account, and as a company policy we have to use Azure for repos and everything else (pipelines, releases, etc..) We are going to use the static web app just for viewing a simple angular website however all the source code must remain in the azure devops repo.Is is possible to create a private github account and upload to it only the compiled angular files to make use of the static web app? for example we already have a pipeline to compile and deploy the angular website to an Azure web app service, can this pipeline be modified to publish the same files to the github account? and if so how?
Azure static web apps
This is the error message given by scikit-learn's version of the pipeline. Your code, as is, should not produce this error, but you have probably run "from sklearn.pipeline import Pipeline" somewhere, which has overwritten the Pipeline object. From a methodological point of view, I nonetheless find it questionable to use a sampler after the preprocessing and feature selection in a general setting. What if the features you select are relevant because of the imbalance in your dataset? I would prefer using it in the first step of a pipeline (but this is up to you, it should not cause any errors).
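One way to make such a clash impossible, sketched below while reusing the objects defined in the question, is to import imblearn's pipeline under an explicit alias so a later sklearn import cannot shadow it:

from imblearn.pipeline import Pipeline as ImbPipeline

pipeline_rf_smt_fs = ImbPipeline([
    ('preprocess', preprocessor),  # same preprocessor, smt, etc. as defined in the question
    ('selector', SelectKBest(mutual_info_classif, k=30)),
    ('smote', smt),
    ('rf_classifier', RandomForestClassifier(n_estimators=600, random_state=2021)),
])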
from imblearn.pipeline import Pipeline from imblearn.over_sampling import SMOTE smt = SMOTE(random_state=0) pipeline_rf_smt_fs = Pipeline( [ ('preprocess',preprocessor), ('selector', SelectKBest(mutual_info_classif, k=30)), ('smote',smt), ('rf_classifier',RandomForestClassifier(n_estimators=600, random_state =2021)) ] ) I am getting the error below: "All intermediate steps should be transformers and implement fit and transform or be the string 'passthrough' 'SMOTE(random_state=0)' (type <class 'imblearn.over_sampling._smote.SMOTE'>) doesn't". I believe SMOTE has to be used after the feature selection step. Any help on this would be very much appreciated.
how to use SMOTE & feature selection together in sklearn pipeline?
The solution is to change the yaml pipeline by adding distributionBatchType: 'basedOnAssembly' as seen below. Turning on the diagnostics also fixed this issue, but was not an ideal solution.- task: VSTest@2 inputs: testSelector: 'testAssemblies' testAssemblyVer2: | **\Tests\** !**\*TestAdapter.dll !**\obj\** searchFolder: '$(System.DefaultWorkingDirectory)' diagnosticsEnabled: false distributionBatchType: 'basedOnAssembly'
The unit tests calls the following method. It passes when run locally but fails when run as part of Azure DevOps build pipeline due to a LoaderException on assembly.GetTypes(). I'm not sure how to debug this because it doesn't happen locally. Normally I would run through this in debug mode and look at the LoaderException. The pipeline task VSTest@2 only logs the stacktraceAssembly[] assemblies = AppDomain.CurrentDomain.GetAssemblies(); foreach (Assembly assembly in assemblies) { if (assembly != null) { Type[] types = assembly.GetTypes(); } }Test method Tests.Data.IBlockConversionTests.TestIBlockConversion threw exception: System.Reflection.ReflectionTypeLoadException: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information. Stack Trace: at System.Reflection.RuntimeModule.GetTypes(RuntimeModule module) at System.Reflection.RuntimeModule.GetTypes() at System.Reflection.Assembly.GetTypes() at Plugin.PluginLoader`1.LoadPlugins(IEnumerable`1 assemblies) in D:\a\1\s\Core\Plugin\Plugin\PluginLoader.cs:line 80 at Tests.Data.IBlockConversionTests.TestIBlockConversion() in D:\a\1\s\Tests\Data\IBlockConversionTests.cs:line 42
UnitTesting assembly.GetTypes() throws an exception in the build pipeline only
"The term 'Build.DefaultWorkingDirectory' is not recognized as the name of a cmdlet": since you are using a CD process to start the Azure CLI deployment, it seems that you are using a Release Pipeline. Based on my test, I could reproduce this issue in a Release Pipeline. The root cause is that the variable Build.DefaultWorkingDirectory cannot be used in a Release Pipeline; this variable can only be used in a Build Pipeline (CI process). To solve this issue, you could try to use the variable $(System.DefaultWorkingDirectory) instead. Extract Files task / Azure CLI script sample: az storage blob upload-batch --account-name $(Name) --account-key $(key) -s $(system.DefaultWorkingDirectory)/$(build.buildid) --pattern * -d xxx. Here is a doc about the variables in Release Pipelines. On the other hand, in addition to Azure CLI scripts, you can also directly use the Azure file copy task. It will be more convenient.
I am trying to stand up a simple CI/CD process for our single page React application. The CI has been set up to npm build the React application and publish the build archive (.zip) to the $(Build.ArtifactStagingDirectory).The CD process is then triggered to extract the build from the artifact .zip file and utilize the AZ CLI task to push the files to the Azure Blob Storage account hosting the static website.The deployment completes successfully, however no files are published to the blob storage account. I was assuming that it had something to do with the --source flag of the AZ CLI command and tried to utilize the $(Build.DefaultWorkingDirectory)/$(Build.BuildId) environment varialbe as the source.This however caused the deployment to fail with the following error.The term 'Build.DefaultWorkingDirectory' is not recognized as the name of a cmdletI am unsure of how the DevOps directories are structured or how the AZ CLI interacts with them via the --source flag but any tips or suggestions would be greatly appreciated.
Azure DevOps Pipeline AZ CLI upload-batch not pushing files
If I understand correctly, the !reference keyword in GitLab may help you, but probably not fully resolve your issue: https://docs.gitlab.com/ee/ci/yaml/yaml_optimization.html#reference-tags. Your security-step could look like this: security-step: stage: test script: - !reference [actual_build, script] - ./security-check This will include all your build script lines before the security step.
here is my problem: I have a GitLab-ci.yml template in a separate repository with all basic stages/steps in it. In another repo with the actual code, I include that template and overwrite a variable for unit testing for example.Here is how the base template looks like:--- image: docker:git variables: TEST_CMD: unit-test $UNIT_TEST_CMD_OPTIONS stages: - pre-build - build - test actual_build: stage: build <<: *only-default script: - echo "build it" - ... ....Now I include that template into my actual repo gitlab-ci.yml file to execute the whole pipeline:--- include: - project: blahfubar/gitlab-ci-template ref: master file: basic.gitlab-ci.yml variables: UNIT_TEST_CMD_OPTIONS: --testsuite unitNow the problem: I want to add an extra job just in this repo to i.e. execute a security check there, but this does not work and with the "extends:"-keyword only jobs can be extended.--- include: - project: blahfubar/gitlab-ci-template ref: master file: basic.gitlab-ci.yml variables: UNIT_TEST_CMD_OPTIONS: --testsuite unit security-step: stage: test script: - ./security-checkIf I do it like that, only the "security-step" will be executed, but not all the other steps from the included template.May anyone has a solution for that because I didn't find anything in the gitlab-ci docs.Thanks in advance :)
add gitlab-ci job when including a gitlab-ci template
This is currently as intended. GitLab have an epic to resolve this, which you can follow here, along with the associated issues: https://gitlab.com/groups/gitlab-org/-/epics/4509 and https://gitlab.com/gitlab-org/gitlab/-/issues/254974
Here is a screenshot showing part of my pipeline (censored):It might be a stupid question, but why sometimes lines that are connecting the jobs are nice, like that:and sometimes they are all tangled up, like here:What does it depend on? All jobs in the stage "B" on the right have defined:needs: - A1so why job A1 is connected with B1, B2 and B3 but A2 is connected only with B1? What can I do to "untangle" those lines? That is, how to make A2 connected with nothing on the right side, or with everything - not only with B1.Here is .gitlab-ci.yml (censored/anonymised)stages: - A - B A1: stage: A script: - something cache: paths: - some_cache policy: pull needs: - some_job_from_previous_stage_not_shown_here A2: stage: A script: - something cache: paths: - some_cache policy: pull needs: - some_previous_job allow_failure: true B1: stage: B script: - something cache: paths: - some_cache policy: pull needs: - A1 B2: stage: B script: - something cache: paths: - some_cache policy: pull needs: - A1 B3: stage: B script: - something cache: paths: - some_cache policy: pull needs: - A1
Question about lines connecting jobs in graphic representation of GitLab CI pipeline
This is likely PowerShell's native command processor waiting to see if any more output is written before binding it to the downstream command. Explicitly flushing the output seems to work (tested with Python 3.8 and PowerShell 7.0.1 on Ubuntu 20.04): python3 -c "from time import *; print(time(), flush=True); sleep(3)" | python3 -c "from time import *; print(input()); print(time())" This gives me timestamps within 2 ms of each other. On Windows, the flush=True option doesn't seem to alleviate the problem, but wrapping the second command in ForEach-Object does: python3 -c "from time import *; print(time(), flush=True); sleep(3)" | ForEach-Object { $_ | python3 -c "from time import *; print(input()); print(time())" }
When running processes in a PowerShell pipeline, the next process will only start after the previous one exits. Here's a simple command to demonstrate: python -c "from time import *; print(time()); sleep(3)" | python -c "from time import *; print(input()); print(time())" This prints something like: 1599497759.5275168 1599497762.5317411 (note that the times are 3 seconds apart). Is there any way to make the processes run in parallel? I am looking for a solution that works on either Windows PowerShell or PowerShell Core on Windows. I found this question, but it only deals with cmdlets, not normal executables.
How can I run PowerShell pipeline processes in parallel?
If you don’t need to edit Pipelines, I recommend VS Code with the Prophet plugin. Check it out on the VS Code plugin marketplace.https://marketplace.visualstudio.com/items?itemName=SqrTT.prophet
After setting up the configuration for pipeline and clicking on debug in eclipse, I get this error.Errors occurred during the build. Errors running builder 'Digital Server Upload' on project 'DigitalServer'. Tree element '/' not found.
Error while trying to debug demandware pipeline code using eclipse
Don't use a channel for the metadata file; just declare it as metadata = file("metadata.tsv")
I want to run an analysis multiple times with different variables on the same input from a previous processes on Nextflow:process a { output: file id, "{id}.out" into a } metadata = Channel.fromPath("metadata.tsv") vars_to_analyze = Channel.from(["var_a", "var_b"]) process b { input: tuple id, file from a file m from metadata val var from vars_to_analyze output: tuple id, path("${id}-${var}.out") into b """ command --var ${var} --metadata ${m} ${file} > ${id}-${var}.out """ }Which is the correct way to re-use metadata and file with different values?
How to combine multiple queue channels with values in Nextflow?
The question is indeed confusing, since it presents architectural models as if they were mutually exclusive (a system can be at the same time layered and client-server, for instance) and relies on ambiguous terminology. When it comes to architectural diagrams, there are standard diagrams, which follow a well-known formal graphical notation. Typical examples are: UML; older OO notations (e.g. Booch, Rumbaugh or Objectory - really old, because these were merged together to make UML); and non-OO notations, such as, for example, the IDEF suite (which was enriched in the meantime with an OO layer), SADT, or Gane & Sarson (also quite old and less and less used, except in some niche markets). Among those, the only one which qualifies officially and unambiguously as a standard is UML: it's the only one that is recognized by an international standard-setting body (ISO/IEC 19505). But in architecture you also have a fair bunch of non-standard diagrams that convey the structural intent graphically. Typically, a layered arrangement of services, or a hexagonal or a concentric presentation, are frequently used. Sometimes it's even more visual, with clients shown as PCs and several servers in the network. All of these use non-standard notations.
I am writing a documentation for my Software engineering subject. My project is on a Hospital Managements System. Here is the question that is making me confused.(2. Architectural design) Present the overall software architecture, stating whether it’s Layered, Repository, Client-Server, or Pipe and Filter architecture( – skim through pages 155 to 164 of our text reference book to see descriptions of these different architectures).Describe and present it on a standard or non-standard diagram.So what is the difference between standard and non-standard diagram?
Difference between standard and non-standard diagrams in Software engineering
If you sync and get the result after sending each command to the pipeline then there isn't much difference to sending the commands without pipelining. The benefit of pipelining comes from sending a number of commands without waiting on their responses and then reading the responses back all at once (thus eliminating a lot of time which would have been spent waiting for responses. More info:https://redis.io/topics/pipelining).So a more "pipeline-y" implementation of what you have above would look like this (please pardon any mistakes with the exact pipeline method names, I mostly use Spring Data rather than straight Jedis):List<String> keysToCheckForTtl = ... Map<String, Response> responses = new HashMap<>(); for (String key : keysToCheckForTtl) { responses.put(key, pipeline.ttl(key)); } pipeline.sync(); // get the responses back for all TTL commands for (String key : keysToCheckForTtl) { if (responses.get(key).get().equals(-1L)) { pipeline.expire(key, 15); } } pipeline.exec(); // finalize all expire commandsAs you can see, this works with a list of keys to check and expire. If you only need to check and then expire a single key, you don't need a pipeline.
I'm trying to understand how pipeline works nad want to try something different. I've noticed that there's no method for setting expire key if the key doesn't have any so I made an example for that with Jedis.ExampleMap<String, Response> responses = new HashMap<>(); long start = System.currentTimeMillis(); try (Jedis resource = redisManager.getResource()) { Pipeline pipeline = resource.pipelined(); responses.put("time", pipeline.ttl(args[1])); pipeline.sync(); pipeline.multi(); if (responses.get("time").get().equals(-1L)) { pipeline.expire(args[1], 15); } pipeline.exec(); }I'd like to know that should I use like that or do you have any idea about it? I couldn't find any solution for that.
Redis pipeline usage
For the scoring parameter of the GridSearchCV, you can just pass e.g. 'f1_weighted' as a string. That should do the trick. You can have a look at the sklearn docs for possible values.
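A minimal, self-contained sketch of this (the pipeline and parameter grid below are illustrative, not the ones from the question):

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
pipe = Pipeline([('scale', StandardScaler()), ('clf', SVC())])

# 'f1_weighted' avoids the multiclass average='binary' error.
grid = GridSearchCV(pipe, {'clf__C': [0.1, 1, 10]}, scoring='f1_weighted', cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)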
I have setup asklearn.GridsearchCVwith aPipelineas the estimator. My problem is a multiclass classification. I clearly receive this error:ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].Which is because I useF1score without setting theaverageargument. My question iswhere exactly should I pass this argument to the object?my code:estimator = GridSearchCV( estimator=Pipeline(setting['layers']), param_grid=setting['hyper_parameters'], cv=cv, scoring=self.scoring, refit=self.refit_metric, n_jobs=n_jobs, return_train_score=True, verbose=True )and then:estimator.fit( self.x_train, self.y_train )The error is raised on the.fit()line, but I guess I should pass the parameter when instantiating theGridsearchCV.
sklearn: give param to F1 score in gridsearchCV/Pipeline
you just need to provide username and email before you push:git config user.email "some-email" git config user.name "some-username"
We have some scripts distributed among several Azure DevOps repos. Our goal is to :Parse all of those reposExtract help info from our scripts and generate .md filesPush those .md files to another local Azure DevOps reposWe're using a Release pipeline, with our sources repos as artifacts.How can we authenticate to this local repos to then push commits? I got following error:2020-03-31T07:35:31.9598572Z ##[error]*** Please tell me who you are. Run git config --global user.email "[emailΒ protected]" git config --global user.name "Your Name" to set your account's default identity. Omit --global to set the identity only in this repository.Is it possible to use an agent identity or something like that?
Push to local Azure DevOps Git from Release Pipeline
Use the #fromBase64(string) and #readJson(string) helpers.
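For example, an expression of roughly this shape could be used in a stage field; the path to the base64 payload and the field names are placeholders, since they depend on how the artifact is exposed in your pipeline: ${#readJson(#fromBase64(<base64-encoded-manifest>))["metadata"]["name"]}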
I'm trying to use the SpEL in Spinnaker pipeline to convert an artifact from base64 to JSON because I want to retrieve some field from that manifest. Any suggestions on how to do that?
How to convert base64 artifact to json/yaml in spinnaker?
Use mapcat instead of map. The difference is: map is one-to-one, while mapcat is one-to-many.
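Applied to the toy example from the question, a minimal sketch looks like this (mapcat concatenates the vector returned by (juxt f g), so each input yields two separate outputs on the channel):

(require '[clojure.core.async :refer [chan pipeline put! <!!]])

(def ca (chan))
(def cb (chan))
(defn f [in] in)
(defn g [in] (* 2 in))

;; (mapcat (juxt f g)) is an expanding transducer: one input, two outputs.
(pipeline 1 cb (mapcat (juxt f g)) ca)
(put! ca 1)
(<!! cb) ;; => 1
(<!! cb) ;; => 2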
I am trying to understand how to set up a clojure pipeline that has multiple outputs per input, but so far I had no luck getting that to work.Thedocumentationfor pipeline states that[...] the transducer will be applied independently to each element [...] and may produce zero or more outputs per input. [...]However, I fail to understand how to get more than 1 output per input.I want to apply multiple transformations to the same input and put all results onto the output channel. I am sure this could also be done usingmult,tapandmerge, however, this introduces much more overhead compared to adding another transformation to a pipeline transducer.I tried it with a toy example:(def ca (chan)) (def cb (chan)) (defn f [in] in) (defn g [in] (* 2 in)) (pipeline 1 cb (map (juxt f g)) ca) (put! ca 1) (<!! cb)However, this outputs [1 2] in a single output instead of two separate outputs.So: How can I set up a clojure pipeline between two channels such that it produces multiple (>1) outputs on the output channel per input on the input channel?
How to create multiple outputs using a transducer on a pipeline?
You can use the needs keyword (introduced in GitLab 12.2). From the GitLab documentation: "The needs: keyword enables executing jobs out-of-order, allowing you to implement a directed acyclic graph in your .gitlab-ci.yml. This lets you run some jobs without waiting for other ones, disregarding stage ordering so you can have multiple stages running concurrently." An example and more details on some limitations are here: https://docs.gitlab.com/ee/ci/yaml/README.html#needs
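A minimal sketch of the graph from the question (job bodies and the tags-only rules are omitted; the job names are illustrative):

stages: [prebuild, build, deploy]

prebuild:
  stage: prebuild
  script: ["echo prebuild"]

build_a:
  stage: build
  needs: [prebuild]
  script: ["echo build A"]

deploy_a:
  stage: deploy
  needs: [build_a]          # starts as soon as build_a finishes,
  script: ["echo deploy A"] # without waiting for build_b or build_c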
is it possible to do something like this in GitlabCI?[prebuild] ----- [build A] --- [deploy A] \--- [build B] --- [deploy B] \-- [build C] --- [deploy C]I've looked a lot into Gitlab documentation but could not find a way to achieve this. I basically don't want my deploy stage to wait for the build stage to be done, if a single build stage is done, its deploy stage related to that build should start.A simple answer to that question could make this a single step, but I only want to deploy when tags are made. I really want a separate step, so this is not an option.
How to make a serial pipeline for multiple targets a parallel pipeline?
CircleCI provides the solution through workspaces. To share an artifact or any set of files, you need to persist_to_workspace in the test job and attach_workspace in the deploy job. The only catch is to persist the artifact for the deploy phase; the workspace itself is available for not more than 30 days.
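A rough sketch of that layout in CircleCI 2.1 syntax (the image, paths and file names are made up for the example):

version: 2.1
jobs:
  test:
    docker:
      - image: cimg/base:stable
    steps:
      - run: mkdir -p artifacts && echo "build output" > artifacts/result.txt
      - persist_to_workspace:
          root: .
          paths:
            - artifacts
  deploy:
    docker:
      - image: cimg/base:stable
    steps:
      - attach_workspace:
          at: .
      - run: cat artifacts/result.txt  # the file produced by the test job
workflows:
  build-and-deploy:
    jobs:
      - test
      - deploy:
          requires:
            - test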
I have test and deploy jobs in the workflow, and when I execute them sequentially the deploy step overrides the artifact. Is it possible to keep it, so it is passed from stage to stage sequentially? Storing the artifact in Slack or any other system is not an option due to limitations in the development environment.
Is it possible to store artifact into Circle CI during the multi-stage build?
I think you are not referring to the stages of the pipeline by the correct names in the grid. The names that you assign in the pipeline (tfidf, selectkbest, linearscv) for each stage should be the same ones used in the grid. I would do: pipeline = Pipeline([('tfidf', TfidfVectorizer(sublinear_tf=True)), ('selectkbest', SelectKBest()), ('linearscv', LinearSVC(max_iter=10000, dual=False))]) grid = { 'tfidf__ngram_range':[(1,2),(2,3)], 'tfidf__stop_words': [None, 'english'], 'selectkbest__k': [10000, 15000], 'selectkbest__score_func': [f_classif, chi2], 'linearscv__penalty': ['l1', 'l2'] }
I getValueError: Invalid parameter...for every line in my grid.I have tried removing line by line every grid option until the grid is empty. I copied and pasted the names of the parameters frompipeline.get_params()to ensure that they do not have typos.from sklearn.model_selection import train_test_split x_in, x_out, y_in, y_out = train_test_split(X, Y, test_size=0.2, stratify=Y) from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.pipeline import Pipeline from sklearn.feature_selection import SelectKBest, chi2, f_classif from sklearn.svm import LinearSVC from sklearn.model_selection import GridSearchCV grid = { 'TF-IDF__ngram_range':[(1,2),(2,3)], 'TF-IDF__stop_words': [None, 'english'], 'SelectKBest__k': [10000, 15000], 'SelectKBest__score_func': [f_classif, chi2], 'linearSVC__penalty': ['l1', 'l2'] } pipeline = Pipeline([('tfidf', TfidfVectorizer(sublinear_tf=True)), ('selectkbest', SelectKBest()), ('linearscv', LinearSVC(max_iter=10000, dual=False))]) grid_search = GridSearchCV(pipeline, param_grid=grid, scoring='accuracy', n_jobs=-1, cv=5) grid_search.fit(X=x_in, y=y_in)
Invalid Parameters for Sklearn GridSearchCV
You're using % (ForEach-Object) to process input from the pipeline ($dir) object by object. Inside the script block ({ ... }) that operates on the input, you must use the automatic $_ variable to reference the pipeline input object at hand - commands you use inside the script block do not themselves automatically receive that object as their input. Therefore, your copy (Copy-Item) command: copy -Destination $destinationRoot -Recurse -Force lacks a source argument and must be changed to something like: $_ | copy -Destination $destinationRoot -Recurse -Force Without a source argument (passed to -Path or -LiteralPath) - which is mandatory - Copy-Item prompts for it, which is what you experienced (the default parameter is -Path). In the fixed command above, passing $_ via the pipeline implicitly binds to Copy-Item's -LiteralPath parameter.
I have the following script$sourceRoot = "C:\Users\skywalker\Desktop\deathStar\server" $destinationRoot = "C:\Users\skywalker\Desktop\deathStar/server-sandbox" $dir = get-childitem $sourceRoot -Exclude .env, web.config Write-Output "Copying Folders" $i=1 $dir| %{ [int]$percent = $i / $dir.count * 100 Write-Progress -Activity "Copying ... ($percent %)" -status $_ -PercentComplete $percent -verbose copy -Destination $destinationRoot -Recurse -Force $i++I tried to reference thispost, but I ended up getting the following prompt in the powershell console.Supply values for the following parameters:Path[0]:
Command in ForEach-Object script block unexpectedly prompts for arguments
First, put the cache at the global level. This will make sure that the jobs share the same cache. Second, you can use cache:key:files, introduced with GitLab 12.5, to only recreate the cache when the package.json changes. cache: key: files: - package.json paths: - node_modules/ build: stage: build only: - develop script: - npm install Further information: https://docs.gitlab.com/ee/ci/yaml/#cachekeyfiles Additional hints: you might want to check on package-lock.json instead of package.json. I recommend reading the cache mismatch chapter in the documentation to make sure you don't run into common problems where the cache might not be restored. Instead of simply adding npm install, you can also skip this step when the node_modules folder was recreated from the cache. The following bash addition to your npm install will only run the command if the node_modules folder doesn't exist: build: stage: build only: - develop script: - if [ ! -d "node_modules" ]; then npm install; fi
I am working on the performance tuning for Gitlab pipeline usingcache.This is a nodejs project usingnpmfor the dependency management. I have put thenode_modulesfolder into cache for subsequent stages with following setting:build: stage: build only: - develop script: - npm install cache: key: $CI_COMMIT_REF_SLUG paths: - node_modules/Could I make the cache available for pipeline triggered next time? Or the cache is accessible in single pipeline?If I can access that within multiple pipeline, could I recache the node module only when we change package.json?
Gitlab pipeline: How to recache node modules only when dependency changed?
Even under a cache miss, the pipeline will go on until the RAW (read after write) dependency bites. ldr r12, [r0], #4 / subs r12, r12, r1 / beq end_loop - the subs instruction cannot be executed at the same time as ldr due to the RAW dependency, and the beq instruction cannot be executed at the same time as subs due to the CPSR RAW dependency. All in all, the sequence above will take 6 cycles in the best case: three cycles of instruction execution plus 3 cycles of L1 hit latency, while it will be 3 + 28 = 31 cycles in the worst case (total cache miss).
I am reading the ARM Cortex-A8 data sheet. In the data sheet ARM states that a data load that misses in L2 takes at least 28 core cycles to complete. Now I cannot figure out whether, during these 28 cycles, the CPU will stall and put bubbles in the pipeline, or execute other instructions until the load completes. What if we have a branch based on this load result? What if we have another load just after that instruction that again misses in L2?
ARM Cortex-A8 L2 cache miss overhead
It should be possible. You need an ExternalTransform and an expansion service. See here a test pipeline that does this: counts = (lines | 'split' >> (beam.ParDo(WordExtractingDoFn()) .with_output_types(bytes)) | 'count' >> beam.ExternalTransform( 'beam:transforms:xlang:count', None, EXPANSION_SERVICE_ADDR)) Here beam:transforms:xlang:count is the URN of a transform that should be known to the expansion service. This example uses a custom expansion service that expands that URN into a Java PTransform; you can build your own along the same lines. You can see how this example is started here.
Is it possible to combine Java and Python transforms in Apache Beam?Here is the use case (i.e. dream plan): the raw input data has very high rate, and so some initial aggregation is needed in a reasonably fast language (e.g. Java). The aggregated values are then given to a few transforms (implemented in Python) and then passed through a stack of machine learning models (implemented in Python) to produce some predictions, which will then be utilized again in some Java code.Is it possible in Apache Beam?Thank you very much for your help!
Combining Java and Python in Apache Beam pipeline
If it's planned to be used like Step3(Step2(Step1(filePath))), then Step2 should dispose the stream. It can use the yield return feature of C#, which creates an implementation of IEnumerator<> underneath; that implements IDisposable and allows "subscribing" to the "event" of the enumeration finishing and calling Stream.Dispose at that point. E.g.: IEnumerable<RowType> Step2(Stream stream) { using(stream) using(StreamReader sr = new StreamReader(stream)) { while(!sr.EndOfStream) { yield return Parse(sr.ReadLine()); //yield return implements IEnumerator<> } } // finally part of the using will be called from IEnumerator<>.Dispose() } Then if Step3 uses either LINQ: bool Step3(IEnumerable<RowType> data) => data.Any(item => SomeDecisionLogic(item)); or foreach: bool Step3(IEnumerable<RowType> data) { foreach(var item in data) if(SomeDecisionLogic(item)) return true; } for enumerating, both of them guarantee to call IEnumerator<>.Dispose() (ref1, ECMA-334 C# Spec, ch. 13.9.5), which will call Stream.Dispose.
Recently I started to investigate aPipelinepattern or also known asPipes and Filters. I thought it is a good way to structure the code and applications which just process the data. I used thisarticleas a base for my pipeline and steps implementation (but this is not so important). As usual blog is covering simple scenario but in my case I need (or maybe not) to work onIDisposableobjects which might travel through the process.For instance StreamsLet's consider simple pipeline which should load csv file and insert its rows into some db. In simple abstraction we could implement such functionsStream Step1(string filePath) IEnumerable<RowType> Step2(Stream stream) bool Step3(IEnumerable<RowType> data)Now my question is if that is a good approach. Because if we implement that as step after step processing theStreamobject leaves first step and it is easy to fall into a memory leakage problem. I know that some might say that I should haveStep1which is loading and deserialising data but we are considering simple process. We might have more complex ones where passing a Stream makes more sense.I am wondering how can I implement such pipelines to avoid memory leaks and also avoiding loading whole file intoMemoryStream(which would be safer). Should I somehow wrap each step intry..catchblocks to callDispose()if something goes wrong? Or should I pass allIDisposableresources intoPipelineobject which will be wrapped withusingto dispose all resources produced during processing correctly?
Pipeline pattern and disposable objects
The problem lies in your manual steps: you refit the scaler using test data, whereas you need to fit it on the train data and use the fitted instance on the test data. See here for details: "How to normalize the Train and Test data using MinMaxScaler sklearn" and "StandardScaler before and after splitting data". from sklearn.datasets import make_classification, make_regression from sklearn.preprocessing import MinMaxScaler, PolynomialFeatures from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression from sklearn.pipeline import make_pipeline X, y = make_regression(n_features=3, n_samples=50, n_informative=1, noise=1) X_train, X_test, Y_train, Y_test = train_test_split(X, y) pipeLine = make_pipeline(MinMaxScaler(),PolynomialFeatures(), LinearRegression()) pipeLine.fit(X_train,Y_train) print(pipeLine.score(X_test,Y_test)) print(pipeLine.steps[2][1].intercept_) print(pipeLine.steps[2][1].coef_) scaler = MinMaxScaler().fit(X_train) X_trainScaled = scaler.transform(X_train) X_trainScaledandPoly = PolynomialFeatures().fit_transform(X_trainScaled) X_testScaled = scaler.transform(X_test) X_testScaledandPoly = PolynomialFeatures().fit_transform(X_testScaled) reg = LinearRegression() reg.fit(X_trainScaledandPoly,Y_train) print(reg.score(X_testScaledandPoly,Y_test)) print(reg.intercept_) print(reg.coef_) print(reg.intercept_ == pipeLine.steps[2][1].intercept_) print(reg.coef_ == pipeLine.steps[2][1].coef_)
Simple example below using minmaxscaler, polyl features and linear regression classifier.doing via pipeline:pipeLine = make_pipeline(MinMaxScaler(),PolynomialFeatures(), LinearRegression()) pipeLine.fit(X_train,Y_train) print(pipeLine.score(X_test,Y_test)) print(pipeLine.steps[2][1].intercept_) print(pipeLine.steps[2][1].coef_) 0.4433729905419167 3.4067909278765605 [ 0. -7.60868833 5.87162697]doing manually:X_trainScaled = MinMaxScaler().fit_transform(X_train) X_trainScaledandPoly = PolynomialFeatures().fit_transform(X_trainScaled) X_testScaled = MinMaxScaler().fit_transform(X_test) X_testScaledandPoly = PolynomialFeatures().fit_transform(X_testScaled) reg = LinearRegression() reg.fit(X_trainScaledandPoly,Y_train) print(reg.score(X_testScaledandPoly,Y_test)) print(reg.intercept_) print(reg.coef_) print(reg.intercept_ == pipeLine.steps[2][1].intercept_) print(reg.coef_ == pipeLine.steps[2][1].coef_) 0.44099256691782807 3.4067909278765605 [ 0. -7.60868833 5.87162697] True [ True True True]
different scores when using scikit-learn pipeline vs. doing it manually
It is happening because the Jenkins pipeline engine is not actually running this Groovy code directly; it interprets it with a parser in order to apply script security and keep the Jenkins system secure, amongst other things. To quote: "Pipeline code is written as Groovy but the execution model is radically transformed at compile-time to Continuation Passing Style (CPS)" - see the best practices at https://jenkins.io/blog/2017/02/01/pipeline-scalability-best-practice/. In short, don't write complex Groovy code in your pipelines - try to use standard steps supplied by the pipeline DSL or plugins. Simple Groovy code in script sections can be useful in some scenarios, however. Nowadays I am putting some of my more complex stuff in plugins that supply custom steps.
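One common workaround, sketched below in scripted-pipeline style, is to move such Groovy into a @NonCPS helper method so the closure passed to inject runs as plain Groovy rather than CPS-transformed code:

@NonCPS
def renderKvs(Map kvs) {
    // Runs outside the CPS interpreter, so inject behaves as in regular Groovy.
    kvs.inject('') { s, k, v -> s + "{'$k': '$v' } " }
}

node {
    echo renderKvs(['key1': 'value1', 'key2': 'value2'])
}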
I've got a piece of code that works perfectly in all of the groovy interpreters I know of, including Jenkins scripting console. Yet it has a weird behavior when it comes to pipeline scripts.def kvs = ['key1': 'value1', 'key2': 'value2'] println kvs println kvs.inject(''){ s,k,v -> s+= "{'$k': '$v' } "}First of all, the map is printed differently:Expected:[key1:value1, key2:value2]Got:{key1=value1, key2=value2}Then, more of a problem, the yielded result differs drastically:Expected:{'key1': 'value1' } {'key2': 'value2' }Got:nullBoth of these results were obtained with the following groovy version:2.4.12. (Though, outside of the pipeline script, I also tried versions2.4.6and2.4.15and always got the expected results)Please note that I am not interested in workarounds. I only wish to understand why the behavior changed from normal groovy to pipeline script.
Why does this groovy code work differently in jenkins pipeline script
Provided the input object is typed correctly, yes:filter Invoke-Method { param( [System.Management.Automation.PSMethod] $Method ) return $Method.Invoke($_) } (65..74 -as [byte[]]) |Invoke-Method -Method ([System.Text.Encoding]::UTF8.GetString)
It would be good to have a PS filter for calling method such as[System.Text.Encoding]::UTF8.GetStringto make possible something like:filter Invoke-Method { ... ?? ... } Invoke-WebRequest $url ` | Select-Object Content ` | Invoke-Method [System.Text.Encoding]::UTF8.GetStringHereis a sample for member call, but my attempts to construct something like this for my case failed by the moment.
Call methods on the Piped Object
You have to encapsulate your commands with script: //... stages { stage('Initialization started'){ steps{ script{ env.BUILD_ID = 'http://Energy-JobSrv2.vm.dom/api/buildnumber'.ToURL().text currentBuild.displayName = "#" + env.BUILD_ID } echo "Job parameters:\n\t- ROOT_FOLDER: ${params.ROOT_FOLDER}\n\t- Build X86: ${params.buildX86}\n\t- Build X64: ${params.buildX64}\n\t- Commit Version changes: ${params.commitVersionChanges}\n\t- Setup Version: ${params.version}.${env.BUILD_NUMBER}\n\t- Setup Configuration: ${params.setupConfiguration}\nCurrent repository: ${workspace}" } } //... } //...
I've tried to accomplish this: https://www.quernus.co.uk/2016/08/12/global-build-numbers-in-jenkins-multibranch-pipeline-builds/ in order to have a unique build number across all the branches. //... stages { stage('Initialization started'){ steps{ env.BUILD_ID = 'http://Energy-JobSrv2.vm.dom/api/buildnumber'.ToURL().text currentBuild.displayName = "#" + env.BUILD_ID echo "Job parameters:\n\t- ROOT_FOLDER: ${params.ROOT_FOLDER}\n\t- Build X86: ${params.buildX86}\n\t- Build X64: ${params.buildX64}\n\t- Commit Version changes: ${params.commitVersionChanges}\n\t- Setup Version: ${params.version}.${env.BUILD_NUMBER}\n\t- Setup Configuration: ${params.setupConfiguration}\nCurrent repository: ${workspace}" } } //... } //... But I think this is not allowed in Jenkins declarative pipeline files, because when I try to run it, I get this: [Bitbucket] Build result notified org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed: WorkflowScript: 17: Expected a step @ line 17, column 5. env.BUILD_ID = 'http://Energy-JobSrv2.vm.dom/api/buildnumber'.ToURL().text ^ WorkflowScript: 18: Expected a step @ line 18, column 5. currentBuild.displayName = "#" + env.BUILD_ID What is the equivalent in Jenkins declarative pipeline files?
How to set the BuildNumber of a Jenkins declarative pipeline from a service?
First, you have some weird caps in your pipeline - width and height are for video here. They will probably just be ignored, but still, and I'm not sure about some of the others either. For the actual question: just use the audioresample and audioconvert elements of GStreamer to convert to your desired format. E.g. [..] ! rtpL16depay ! audioresample ! audioconvert ! \ audio/x-raw, rate=8000, format=S16LE ! filesink location=Tornado.raw
I am developing an application where I am using a wave file from a location at one end of a pipeline and udpsink at the other end of it: gst-launch-1.0 filesrc location=/path/to/wave/file/Tornado.wav ! wavparse ! audioconvert ! audio/x-raw,channels=1,depth=16,width=16,rate=44100 ! rtpL16pay ! udpsink host=xxx.xxx.xxx.xxx port=5000 The above wave file has a sampling rate of 44100 Hz and a single channel (mono). On the same PC I am using a C++ application to catch these packets and depayload them to a headerless audio file (say Tornado.raw). The pipeline I am creating for this is basically: gst-launch-1.0 udpsrc port=5000 ! "application/x-rtp,media=(string)audio, clock-rate=(int)44100, width=16, height=16, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, channel-positions=(int)1, payload=(int)96" ! rtpL16depay ! filesink location=Tornado.raw Now this works fine. I get the headerless data, and when I play it using Audacity it plays great! I am trying to resample this audio file, while it is in the pipeline, from 44100 Hz to 8000 Hz. Simply changing clock-rate=(int)44100 to clock-rate=(int)8000 is not helping (and is also absurd logically). I am looking for how to get the headerless file at the pipeline output with 8000 Hz sampling. Also, the data that I am getting now is big-endian, but I want little-endian as output. How do I set that in the pipeline? You might relate this to one of my earlier questions.
Resample and depayload audio rtp using gstreamer
They are the same. It is possible that you may want one or the other from a maintainability standpoint, but the outcome of a test set prediction will be identical. Edit: note that this is only the case because the StandardScaler is idempotent. It is strange that you fit the pipeline on data that has already been scaled...
How would one correctly standardize the data without using pipeline? I am just wanting to make sure my code is correct and there is no data leakage.So if I standardize the entire dataset once, right at the beginning of my project, and then go on to try different CV tests with different ML algorithms, will that be the same as creating an Sklearn Pipeline and performing the same standardization in conjunction with each ML algorithm?y = df['y'] X = df.drop(columns=['y', 'Date']) scaler = preprocessing.StandardScaler().fit(X) X_transformed = scaler.transform(X) clf1 = DecisionTreeClassifier() clf1.fit(X_transformed, y) clf2 = SVC() clf2.fit(X_transformed, y) ####Is this the same as the below code?#### pipeline1 = [] pipeline1.append(('standardize', StandardScaler())) pipeline1.append(('clf1', DecisionTreeClassifier())) pipeline1.fit(X_transformed,y) pipeline2 = [] pipeline2.append(('standardize', StandardScaler())) pipeline2.append(('clf2', DecisionTreeClassifier())) pipeline2.fit(X_transformed,y)Why would anybody choose the latter other than personal preference?
Obtaining Same Result as Sklearn Pipeline without Using It
If I understand your question correctly then this might be what you're after.def transformPipeline(fs: Seq[MyType => MyType])(init: MyType): MyType = fs.foldLeft(init)((v, f) => f(v))Tested like so:type MyType = Int transformPipeline(Seq(_+1,_*2,_/3))(17) //res0: MyType = 12
In order to pipeline a variety of data transformation functions I want to iterate through a sequence of functions and apply each to the initial input. For a single input it would be something like this:def transformPipeline(f: MyType => MyType)(val: MyType): MyType = {...}How can I define this function such that instead of accepting a singlef: MyType => MyTypeit would accept something likeSeq(f: MyType => MyType)e.g.def transformPipeline(f: Seq[MyType => MyType])(val: MyType): MyType = {...}
Scala Passing Sequence of Functions as Argument Type
The tools section is expecting you to provide the Name of a specific item in your Global Tool Configuration (https://JENKINS_HOME/configureTools, or Manage Jenkins -> Configure Global Tools). On that page, when you click on the "JDK installations..." button, it will give you a list of all of the JDKs that are configured. Your pipeline step needs to specify one of the items by Name instead of by Version. Based on the error hint, I suspect there is one called "Oracle JDK 8"; you just need to verify that it is pointing to the appropriate version and update your pipeline section to read: tools { jdk "Oracle JDK 8" } Reference: the Jenkins Pipeline Syntax page's tools example includes the following note (emphasis theirs): "The tool name must be pre-configured in Jenkins under Manage Jenkins -> Global Tool Configuration."
I have local copy of Git java code.I want a pipeline job script in Jenkins that can compile and build the code locally and show success or failure .I need a script which use only Java JDK not the maven (as the source code is developed using eclipse java project).The os i am using is windows.pipeline { agent any stages { stage ("build") { tools { jdk "jdk-1.8.0_181" } steps { sh 'java -version' } } }}I am getting below errororg.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed: WorkflowScript: 7: Tool type "jdk" does not have an install of "jdk-1.8.0_181" configured - did you mean "Oracle JDK 8"? @ line 7, column 24. jdk 'jdk-1.8.0_181' ^1 error
How to compile and build Java eclipse code through Jenkins pipeline?
As mentioned in the "Creating datasets" page of BigQuery's documentation, datasets that begin with an underscore are hidden from the navigation pane; however, you can still query tables and views in these datasets even though they are not visible in the web UI. Based on this, I have run some tests and found that an available workaround is to create the Data Studio connection by using a Custom Query; in this way, you can keep the dataset hidden from the web UI and write the statements that access it based on your project needs.
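For example, the Data Studio custom query can simply reference the hidden dataset by name (the project, dataset and table names below are placeholders):

SELECT *
FROM `my-project._hidden_dataset.my_table`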
I am working on a data pipeline which eventually results in a table within a dataset in BigQuery. There are two conditions this dataset needs to meet: (1) it has to be able to connect to datastudio and (2) this dataset needs to be hidden in the WebUI of BigQUery. BigQuery documentation suggests the use of underscore in the naming of the dataset to hide it from the BigQuery WebUI. This works and I can control it through CLI. However, this also results in it being hidden from DataStudio which makes it not possible to connect to this dataset from DataStudio. I would like to know, if possible, how this (creating a dataset hidden from the BigQuery WebUI with connection possibility to DataStudio) could be achieved without having to create a new project.
How to create a hidden dataset in the BigQuery WebUI whilst keeping datastudio connection possible?
You can move all the powers work inside a single custom transformer. We can change your IrisDataManupulation class to handle the list of powers inside it: class IrisDataManupulation(BaseEstimator, TransformerMixin): def __init__(self, powers=[2]): self.powers = powers def transform(self, X): powered_arrays = [] for power in self.powers: powered_arrays.append(np.power(X, power)) return np.hstack(powered_arrays) Then you can just use this new transformer instead of FeatureUnion: fu = IrisDataManupulation(powers=[2,3]) Note: if you want to generate polynomial features from your original features, I would recommend looking at PolynomialFeatures, which can generate the powers you want in addition to other interactions between features.
I am learning Pipelines and FeatureUnions in scikit-learn and thus wondering whether it is possible to repeated apply 'make_union' on a class?Consider the following code:import numpy as np import pandas as pd from sklearn.base import BaseEstimator, TransformerMixin from sklearn.pipeline import Pipeline, FeatureUnion from sklearn.linear_model import LogisticRegression import sklearn.datasets as d class IrisDataManupulation(BaseEstimator, TransformerMixin): """ Raise the matrix of feature in power """ def __init__(self, power=2): self.power = power def fit(self, X, y=None): return self def transform(self, X): return np.power(X, self.power) iris_data = d.load_iris() X, y = iris_data.data, iris_data.target # feature union: fu = FeatureUnion(transformer_list=[('squared', IrisDataManupulation(power=2)), ('third', IrisDataManupulation(power=3))])QUESTIONAny neat way to create the FeatureUnion without repeating the same transformer, but rather passing a list of parameters?For example:fu_new = FeatureUnion(transformer_list=[('raise_power', IrisDataManupulation(), param_grid = {'raise_power__power':[2,3]})
repeated FeatureUnion in scikit-learn
apache_beam.transforms.combiners.ToList can work for you if the list fits in memory; beam.combiners.ToList() is the Python version.
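A minimal Python sketch of that (in practice the materialized list is usually consumed as a side input rather than pulled out of the pipeline):

import apache_beam as beam

with beam.Pipeline() as p:
    ids = p | beam.Create([1, 2, 3])
    id_list = ids | beam.combiners.ToList()  # a PCollection holding a single list element
    id_list | beam.Map(print)                # e.g. [1, 2, 3]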
I want a list values from Pcollection.PCollection<List<Integer>> lst = bqT2.apply(ParDo.of(new UserId())); // line 1 List myList = lst.getAll(); // line 2but there is no "getAll()" functionI found something similarList<String> dummylist = Arrays.asList(dummy); DoFnTester<String,String> fnTester = DoFnTester.of(new AAA(mapview)); fnTester.setSideInputInGlobalWindow(mapview, csvlist); //dummylines.apply(ParDo.of(fnTester)); List<String> results = fnTester.processBatch(dummylist);but I didn't found any way to use "DoFnTester" function for getting list items.Is there any way to list from PCollection?Just to elaborate more I have two PCollections.PCollection p1 = pipeline.apply("", BigQueryIO.read().fromQuery("SELECT * from myTable where userid in " + lst + ));Note: lst is from line 1Not sure if google dataflow doesn't support simple usecases.
Getting List From PCollections
You are right, this is not exactly well-documented by scikit-learn. (Zero reference to it in the class docstring.) If you use a pipeline as the estimator in a grid search, you need to use a special syntax when specifying the parameter grid. Specifically, you need to use the step name, followed by a double underscore, followed by the parameter name as you would pass it to the estimator, i.e. '<named_step>__<parameter>': value. In your case: parameters = {'knc__n_neighbors': k_vals} should do the trick. Here knc is a named step in your pipeline. There is an attribute that shows these steps as a dictionary: svp.named_steps {'knc': KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=1, n_neighbors=5, p=2, weights='uniform'), 'ss': StandardScaler(copy=True, with_mean=True, with_std=True)} And as your traceback alludes to: svp.get_params().keys() dict_keys(['memory', 'steps', 'ss', 'knc', 'ss__copy', 'ss__with_mean', 'ss__with_std', 'knc__algorithm', 'knc__leaf_size', 'knc__metric', 'knc__metric_params', 'knc__n_jobs', 'knc__n_neighbors', 'knc__p', 'knc__weights']) Some official references to this: the user guide on pipelines, and "Sample pipeline for text feature extraction and evaluation".
I am trying to perform scaling using StandardScaler and define a KNeighborsClassifier(Create pipeline of scaler and estimator)Finally, I want to create a Grid Search cross validator for the above where param_grid will be a dictionary containing n_neighbors as hyperparameter and k_vals as values.def kNearest(k_vals): skf = StratifiedKFold(n_splits=5, random_state=23) svp = Pipeline([('ss', StandardScaler()), ('knc', neighbors.KNeighborsClassifier())]) parameters = {'n_neighbors': k_vals} clf = GridSearchCV(estimator=svp, param_grid=parameters, cv=skf) return clfBut doing this will give me an error saying thatInvalid parameter n_neighbors for estimator Pipeline. Check the list of available parameters with `estimator.get_params().keys()`.I've read the documentation, but still don't quite get what the error indicates and how to fix it.
Error when using scikit-learn to use pipelines
You can use version 3.7.0-SNAPSHOT of the library. See https://github.com/mongodb/mongo-java-driver/pull/434/files#diff-63c7b578add7f32066a07ad92de3fea2R275
The newest version (3.6) of the Mongo server introduced a nice feature in the$lookupstage of aggregations. Now, the operator takes a pipeline as an argument to run on the collection to join (as explainedhere), which, for example, would allow to filter the documents that will be joined before joining. Is there any way to make use of it in the java driver? I've looked through the driver'sreferenceandAPIdocumentation and didn't find anything. Is there something I'm missing or is it not implemented yet?
MongoDB Java driver lookup with inner pipeline
You are missing ForEach-Object (alias %). The following code should work:

Get-ADUser -Filter {whenCreated -ge $now} -SearchBase "OU=staff,OU=SMUC_Users,DC=stmarys,DC=ac,DC=ie" `
    | Where-Object { $_.Enabled -eq 'True' } `
    | %{Get-ADGroup -LDAPFilter ("(member:1.2.840.113556.1.4.1941:={0})" -f $_.DistinguishedName)} `
    | Select-Object -ExpandProperty Name

If you want to output both user and group information, you can expand the code like this:

Get-ADUser -Filter {whenCreated -ge $now} -SearchBase "OU=staff,OU=SMUC_Users,DC=stmarys,DC=ac,DC=ie" `
    | Where-Object { $_.Enabled -eq 'True' } `
    | %{$group = Get-ADGroup -LDAPFilter ("(member:1.2.840.113556.1.4.1941:={0})" -f $_.DistinguishedName); Write-Output $_.UserPrincipalName $group.Name}
I'm trying to stitch together two lines of PowerShell, but I just can't figure out the syntax. There is a post that sounds like it might be what I need, but it isn't using -LDAPFilter.

To generate a list of AD users created in the last 100 days, I use

$now = ((Get-Date).AddDays(-100)).Date
$users = Get-ADUser -Filter {whenCreated -ge $now} -Searchbase "OU=staff,OU=SMUC_Users,DC=stmarys,DC=ac,DC=ie" | Where-Object { $_.Enabled -eq 'True' }

And this code from "How to get ALL AD user groups (recursively) with Powershell or other tools?" does the next step, which is to find all the groups that a user is a member of:

$username = 'd.trump'
$dn = (Get-ADUser $username).DistinguishedName
Get-ADGroup -LDAPFilter ("(member:1.2.840.113556.1.4.1941:={0})" -f $dn) | select -Expand Name

but I can't pipe the output of the first into the second to get an overall list.

Get-ADUser -Filter {whenCreated -ge $now} -Searchbase "OU=staff,OU=SMUC_Users,DC=stmarys,DC=ac,DC=ie" | Where-Object { $_.Enabled -eq 'True' } | Select-Object DistinguishedName | Get-ADGroup -LDAPFilter ("(member:1.2.840.113556.1.4.1941:={0})" -f $_) | select -expand Name

The error message is:

Get-ADGroup : The search filter cannot be recognized

I thought the second code snippet extracted the distinguished name and supplied it to the filter, and that is what I have tried to do in the pipeline.
Piping output of Get-ADUser to Get-ADGroup with an LDAP filter
I stumbled upon your issue, which seems to be caused by the source having issues (no PTS) and GStreamer not working around them (this relates to GStreamer bug #659489). If you don't have B-frames in your stream, you can try BaseParse.set_pts_interpolation(h264parse, true) and the issue may go away, since the PTS is then computed. PS: a tiny DVR using this workaround is here.
I created the pipeline

gst_parse_launch("rtspsrc location=rtsp://192.168.0.77:554/user=admin_password_=tlJwpbo6_channel=1_stream=0.sdp?real_stream ! queue ! rtph264depay ! h264parse ! splitmuxsink muxer=\"mp4mux name=muxer\" max-size-bytes=20000000 location=/storage/emulated/0/DVR/CameraX/the_file_%d.mp4", NULL);

and it works fine with GStreamer version 1.9.1. I want to use newer versions for other reasons, but with versions 1.10.X and 1.11.X the pipeline stops after working for an indeterminate time, anywhere from seconds to minutes. The logcat output is:

gstqtmux.c:3391:gst_qt_mux_add_buffer: error: Buffer has no PTS.
W/GStreamer+basesrc: 0:01:06.383504349 0xb9380000 gstbasesrc.c:2950:gst_base_src_loop: error: Internal data stream error.
W/GStreamer+basesrc: 0:01:06.383623672 0xb9380000 gstbasesrc.c:2950:gst_base_src_loop: error: streaming stopped, reason error (-5)

I tried different camera models. I removed splitmuxsink and tried with mp4mux, but the result did not change. I also changed the "presentation-time" property of mp4mux, but nothing changed.
GStreamer for Android: Buffer has no PTS
You can't use -Filter with -Identity (Identity is the parameter you're binding to when you pipe). You'll have to filter after the fact:

Get-Content oldusers.txt |
    Get-ADUser -Properties Name,Manager,LastLogon |
    Where-Object -FilterScript { $_.Enabled } |
    Select-Object Name,Manager,@{n='LastLogon';e={[DateTime]::FromFileTime($_.LastLogon)}}
I'm using a text file, generated from AD, of disabled users' names, e.g. jdoakes. I am using the following script to obtain the last time each user logged in. When run, it only returns non-disabled users. Is there any way to make this work?

Get-Content oldusers.txt | Get-ADUser -Filter {Enabled -eq $true} -Properties Name,Manager,LastLogon | Select-Object Name, Manager, @{n='LastLogon';e={[DateTime]::FromFileTime($_.LastLogon)}}

It doesn't return any of the user names in the text file.
Get-ADUser is not using the imported text file
Read about what pipelining is and does, and then you'll understand why this won't work: until you execute the pipeline, none of the commands in it are sent to the server. That makes your conditional statements miss the purpose you had in mind — exists() and zcard() called on the pipeline don't return the actual values at that point in the code.
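As an illustrative sketch (my own, not from the original answer), one way to keep the conditional logic and still batch the round trips is to gather all the EXISTS/ZCARD results in one pipeline first, then queue the writes in a second pipeline. This assumes redis-py 3.x, where zadd takes a mapping; POOL and the object ids below are placeholders standing in for your own values.

import time
import redis

POOL = redis.ConnectionPool(host='localhost', port=6379)  # hypothetical pool
list_of_obj_ids = [1, 2, 3]                               # hypothetical ids

r = redis.Redis(connection_pool=POOL)

# Phase 1: batch all the reads in a single round trip.
read_pipe = r.pipeline()
for obj_id in list_of_obj_ids:
    read_pipe.exists("n:" + str(obj_id))
    read_pipe.zcard("s:" + str(obj_id))
read_results = read_pipe.execute()

# Phase 2: queue the writes based on those results, then execute once.
write_pipe = r.pipeline()
for i, obj_id in enumerate(list_of_obj_ids):
    hash_exists, set_size = read_results[2 * i], read_results[2 * i + 1]
    hash_name = "n:" + str(obj_id)
    sorted_set = "s:" + str(obj_id)
    if hash_exists:
        write_pipe.hset(hash_name, 'val', 0)
    if set_size:
        write_pipe.zadd(sorted_set, {hash_name: time.time()})
write_pipe.execute()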
I'm trying to clear up my understanding of pipelining as implemented in Redis, using the Python client. Take the following example:

my_server = redis.Redis(connection_pool=POOL)
for obj_id in list_of_obj_ids:
    hash_name = "n:" + str(obj_id)
    sorted_set = "s:" + str(obj_id)
    if my_server.exists(hash_name):
        my_server.hset(hash_name, 'val', 0)
    if my_server.zcard(sorted_set):
        my_server.zadd(sorted_set, hash_name, time.time())

I.e. I'm updating multiple hashes by iterating through a for loop. How can I accomplish this kind of bulk update via pipelining? From what I've read, the following is what comes to mind:

my_server = redis.Redis(connection_pool=POOL)
p = my_server.pipeline()
for obj_id in list_of_obj_ids:
    hash_name = "n:" + str(obj_id)
    sorted_set = "s:" + str(obj_id)
    if p.exists(hash_name):
        p.hset(hash_name, 'val', 0)
    if p.zcard(sorted_set):
        p.zadd(sorted_set, hash_name, time.time())
p.execute()

Is this correct?
Using a pipeline for bulk processing in Redis (Python example)
One possible method is to switch the content database's triggers database to none, and then switch it back when you are done. That seems to work for us.
In my application, documents updated or loaded into MarkLogic are sent through the content processing framework based on document collection. This triggers an extensive workload including versioning and querying external systems.Is there a way to temporarily disable the CPF? I sometimes need to make minor changes to all documents (~300,000) such as adding a new document property. In these cases, I would prefer the pipelines not to run at all so my system isn't held up for days.In the past, I've temporarily changed the domain collection names and commented out large sections in the pipeline XML files. Neither of these solutions is ideal since I have dozens of collections and pipeline XML files.
MarkLogic: Disabling content processing (CPF) for large updates
Your LINQ expression returns an anonymous object at the end, so you cannot use string as the return type of your method. You can use dynamic as the type:

private dynamic ParseDateOfBirth(string info)
{
    return info.Split(';')
               .Select(n => n.Split(','))
               .Select(n => new
               {
                   name = n[0].Trim(),
                   datetime = DateTime.ParseExact(n[1].Trim(), "d/M/yyyy",
                                                  CultureInfo.InvariantCulture)
               });
}

Or even better, create a DTO to represent the data you are returning and use that:

public class UserInfo
{
    public string Name { set; get; }
    public DateTime Datetime { set; get; }
}

private IEnumerable<UserInfo> ParseDateOfBirth(string info)
{
    return info.Split(';')
               .Select(n => n.Split(','))
               .Select(n => new UserInfo
               {
                   Name = n[0].Trim(),
                   Datetime = DateTime.ParseExact(n[1].Trim(), "d/M/yyyy",
                                                  CultureInfo.InvariantCulture)
               });
}

Now when you call this method with the input

var s = "sean oneill, 26/06/1985; matt sheridan, 22/09/1984; Jenny Hutching, 21/03/1972";
var result = ParseDateOfBirth(s);

the variable result will be a collection of 3 UserInfo objects.
Is it possible to return a LINQ query like the one below with a return type, or is it read-only like a foreach statement? Thanks!

var s = "sean oneill, 26/06/1985; matt sheridan, 22/09/1984; Jenny Hutching, 21/03/1972";
var s9 = ParseDateOfBirth(s);

private string ParseDateOfBirth(string info)
{
    return info.Split(';').Select(n => n.Split(',')).Select(n => new { name = n[0].Trim(), datetime = DateTime.ParseExact(n[1], "d/M/yyyy", CultureInfo.InvariantCulture)});
}
LINQ: return an IEnumerable value
To anyone who is having trouble with this: the HTML file you want to display has to be named index.html. It works once this is done. You also have to be in the root directory.
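For illustration, a minimal layout that should work with the staticfile buildpack looks something like the sketch below (the extra asset file names are just placeholders, not required); cf push is then run from this directory:

my-static-app/
├── manifest.yml
├── index.html      <- must be named index.html
├── style.css       <- optional asset, placeholder name
└── script.js       <- optional asset, placeholder name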
I am trying to run a simple HTML page on the Bluemix server. However, I get a 403 Forbidden error. I am using a manifest.yml to deploy. This is what manifest.yml contains:

---
applications:
- buildpack: https://github.com/cloudfoundry/staticfile-buildpack.git
  host: helloworld-html-${random-word}
  name: lab6
  memory: 64M
  stack: cflinuxfs2

I am not sure what to do to fix this. This is how the file structure looks. Also, why does my JavaScript show the error 'document' is undefined? It works fine outside Bluemix (e.g. in a browser).
403 Forbidden (nginx) when trying to run a simple static HTML page
Expand the Name or FullName property of the file objects first:

Get-ChildItem -Recurse | Select-Object -Expand FullName | Select-String [search_string]

To get the full names of the matches, expand the Line property of the resulting MatchInfo objects:

Get-ChildItem -Recurse | Select-Object -Expand FullName | Select-String [search_string] | Select-Object -Expand Line

or use a Where-Object filter instead of Select-String:

Get-ChildItem -Recurse | Select-Object -Expand FullName | Where-Object { $_ -match [search_string] }
I am trying to do the equivalent of the Linux command find . | grep [search_string] in PowerShell.

When I call Get-ChildItem -Recurse | Select-String [search_string], it matches the contents of files instead of just the file names I'm trying to match.

How do I get it to only look at the file names?
Piping Get-ChildItem into Select-String selects file contents instead of file names
Interesting question. The bash man page states:

If a command is followed by a & and job control is not active, the default standard input for the command is the empty file /dev/null. Otherwise, the invoked command inherits the file descriptors of the calling shell as modified by redirections.

If you call nc 127.0.0.1 1234 < /dev/null outside a shell script (with job control), you get the same result. You can change your bash script like this to make it work:

#!/bin/bash
nc 127.0.0.1 1234 < /dev/stdin &
sleep 20
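If the goal is to feed nc manually while the script keeps running, one option (a sketch of my own, not part of the original answer) is to give nc a named pipe as stdin and write to it whenever you want; the fifo path and the echoed messages below are arbitrary:

#!/bin/bash
# Sketch: keep nc's stdin open via a named pipe and feed it on demand.
fifo=/tmp/nc_input.$$
mkfifo "$fifo"

nc 127.0.0.1 1234 < "$fifo" &

exec 3>"$fifo"            # hold a writer open so nc doesn't see EOF right away
echo "first message" >&3  # feed data whenever you want
sleep 5
echo "second message" >&3
sleep 20

exec 3>&-                 # close the writer; nc's stdin then reaches EOF
rm -f "$fifo"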
I want to run nc via & and then manually feed data into its stdin from the /proc filesystem whenever I want. The problem is: if I run

nc 127.0.0.1 1234 &

the program runs in the background and I can write whatever I want to stdin. But if I create test.sh containing

#!/bin/bash
nc 127.0.0.1 1234 &
sleep 20

it connects to 1234 and terminates immediately (it doesn't even wait for the 20 seconds). Why? I suspect its stdin is being redirected from somewhere.
bash: running nc with an ampersand terminates the program
I don't know SimpleScalar very well, but I've seen this done in other architectural simulators. The outer loop of the pipeline is supposed to represent one cycle of the processor. Imagine this is the first cycle and, for simplicity, the front end has a width of one. What would happen if each stage were executed in the order of fetch to commit?

cycle 1
- fetch: instruction 1 (place it in fetch/decode latch)
- decode: instruction 1 (place it in decode/rename latch)
- rename: instruction 1 (place it in rename/dispatch latch)
- dispatch: instruction 1 (place it in issue queue)
- issue: instruction 1
etc...

You haven't simulated anything useful here, as this isn't a pipeline. What happens when the loop executes each stage in the order of commit to fetch?

cycle 1
- issue: noop
- dispatch: noop
- rename: noop
- decode: noop
- fetch: instruction 1 (place it in fetch/decode latch)

cycle 2
- issue: noop
- dispatch: noop
- rename: noop
- decode: instruction 1 (place it in decode/rename latch)
- fetch: instruction 2 (place it in fetch/decode latch)

It's not a very complicated idea, but it helps to simplify the simulator.
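To make this concrete, here is a small Python sketch of my own (not SimpleScalar code) of a toy three-stage pipeline with single-buffered latches; because the stages are evaluated from the back of the pipe to the front, each stage reads its input latch before the upstream stage overwrites it:

# Toy in-order pipeline: fetch -> decode -> execute, single-buffered latches.
program = ["i1", "i2", "i3"]

fetch_decode = None     # latch between fetch and decode
decode_execute = None   # latch between decode and execute
pc = 0

for cycle in range(1, 6):
    # Stages run commit-side first; if they ran fetch-side first,
    # "i1" would fall through all three stages in a single cycle.
    executed = decode_execute                                  # execute stage
    decode_execute = fetch_decode                              # decode stage
    fetch_decode = program[pc] if pc < len(program) else None  # fetch stage
    pc = min(pc + 1, len(program))                             # no stalls or branches here

    print(f"cycle {cycle}: executed={executed}, "
          f"decode_execute={decode_execute}, fetch_decode={fetch_decode}")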
So I'm studying architectural simulators (mostly SimpleScalar) to figure out what really happens in the innards of a microprocessor. One fascinating thing I noticed was that the entire pipeline was written backwards! That is, in a purely sequential while loop, the writeback stage comes before the issue stage, which comes before the decode stage, and so on. What's the point of this? Let's say the output of a hypothetical fetch() stage is stored in a shared buffer ('latch') that is accessed by the input of the decode() stage. Since it's a purely sequential while loop, I don't see a problem where this latch/buffer would be overwritten. However, answers to questions like this one claim that simulating the pipeline in reverse somehow avoids this 'problem'. Some insight/guidance in the right direction would be much appreciated!
Pipelining in Architectural Simulators
You can set a date far in the future as the end of the pipeline, so it effectively never expires:

"end": "9999-09-09T12:00:00Z",

and in your activity, add

"scheduler": { "frequency": "Hour", "interval": 3 },

to set the schedule. I do not see a possibility to manually stop the pipeline (other than restarting it).
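As a rough sketch of where those two fragments go in the pipeline JSON (the pipeline, activity, and dataset names, the start date, and the copy source/sink types are placeholders, not taken from your setup):

{
  "name": "CopyOnPremSqlToBlobPipeline",
  "properties": {
    "activities": [
      {
        "name": "CopySqlToBlob",
        "type": "Copy",
        "inputs":  [ { "name": "OnPremSqlDataset" } ],
        "outputs": [ { "name": "AzureBlobDataset" } ],
        "typeProperties": {
          "source": { "type": "SqlSource" },
          "sink":   { "type": "BlobSink" }
        },
        "scheduler": {
          "frequency": "Hour",
          "interval": 3
        }
      }
    ],
    "start": "2017-01-01T00:00:00Z",
    "end": "9999-09-09T12:00:00Z"
  }
}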
I followed the Microsoft tutorials and created a pipeline to move data from an on-premises SQL table to Azure Blob storage, and another pipeline to move the blob data to an Azure SQL table. Following the documentation, I specified the active period (start time and end time) for the pipeline, and everything runs well. The question is: what can I do if I want the pipeline to run every 3 hours until I manually stop it? Currently I need to change the start time and end time in the JSON script and publish it again every day, and I think there should be another way to do it. I'm new to Azure; any help/comment will be appreciated, thank you. P.S. I can't use transactional replication since my on-premises SQL Server is still on SQL 2008.
How to Schedule a Pipeline in Azure Data Factory