Columns: Response (the answer), Instruction (the question body), Prompt (the question title).
So, this thread getting resurrected has led me to "answer" my own question, since I discovered long ago that ArrayLists have been deprecated and Collections.Generic.List<T> is the preferred solution, as pointed out by @santiago-squarzon today. So, for anyone wondering, use

$log = [System.Collections.Generic.List[String]]::new()

or the older New-Object way

$log = New-Object System.Collections.Generic.List[String]

to instantiate the collection, then happily call

$log.Add('Message')

with no pipeline pollution to worry about. You can also add multiple items at once with $log.AddRange(), with the range being another list, or an array if you cast it to a List first. And you can insert a message with something like $log.Insert(0, 'Message'). So yeah, lots of flexibility and no pollution. Winning.

However, this adds pollution for .Remove(): [System.Collections.Generic.List[String]] outputs "True" if the item was successfully removed, while [System.Collections.ArrayList] only outputs the index for .Add(). So it's a stalemate, and we still haven't found a solution that works for all operations. – adamency
I am using an ArrayList to build a sequence of log items to later log. Works a treat, but the Add method emits the current index to the pipeline. I can address this by sending it to $null, like this:

$strings.Add('junk') > $null

but I wonder if there is some mechanism to globally change the behavior of the Add method. Right now I have literally hundreds of > $null repetitions, which is just ugly, especially when I forget one. I really would like to see some sort of global variable that suppresses all automatic pipelining. When writing a large script I want sends to the pipeline to be intentional, as unexpected automatic sends to the pipeline are a VERY large fraction of my total bugs, and the hardest to find.
Suppress Array List Add method pipeline output
I think what you want is:

clf = make_pipeline(MinMaxScaler(), LogisticRegression())

from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

y_pred = cross_val_predict(clf, X_train, y_train, cv=3)
conf_mat = confusion_matrix(y_train, y_pred)

From section 3.1.1.2 of scikit-learn's online documentation: "The function cross_val_predict has a similar interface to cross_val_score, but returns, for each element in the input, the prediction that was obtained for that element when it was in the test set. Only cross-validation strategies that assign all elements to a test set exactly once can be used (otherwise, an exception is raised). Note that the result of this computation may be slightly different from those obtained using cross_val_score as the elements are grouped in different ways."
I'm running a pipeline with logistic regression through cross validation using scikit-learn. I'm getting the scores from each fold in the code below. How do I get the confusion matrix?

clf = make_pipeline(MinMaxScaler(), LogisticRegression())
scores = cross_val_score(clf, X_train, y_train, cv=3)
getting the confusion matrix for each cross validation fold
You are correct, you cannot adjust your target within an sklearn Pipeline. That doesn't mean that you cannot do a grid search, but it does mean that you may have to go about it in a bit more of a manual fashion. I would recommend writing a function to do your transformations and filtering on y, and then manually looping through a tuning grid created via ParameterGrid. If this doesn't make sense to you, edit your post with the code you have for further assistance.

Yeah, that's what I meant. I can't just dump my pipeline into a GridSearchCV, which I find the most convenient way of doing CV. I'm fairly certain I can get it to work manually. Thanks – Matt M.

Is it worth raising this as a feature request? Seems like it would be a common requirement (for problems with more than one output variable) – Bill
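A minimal sketch of that manual approach, assuming a hypothetical preprocess_xy helper that builds the windowed features and drops the targets without a full history; the estimator and grid values are placeholders too:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ParameterGrid, cross_val_score

def preprocess_xy(X, y, window):
    # hypothetical helper: row j of the result stacks X[j] .. X[j+window-1]
    # as the feature vector for target y[j+window]
    feats = np.column_stack([X[i:len(X) - window + i] for i in range(window)])
    return feats, y[window:]

results = []
for params in ParameterGrid({'window': [2, 4, 8], 'C': [0.1, 1.0, 10.0]}):
    Xw, yw = preprocess_xy(X, y, params['window'])
    score = cross_val_score(LogisticRegression(C=params['C']), Xw, yw, cv=3).mean()
    results.append((params, score))

best_params, best_score = max(results, key=lambda r: r[1])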
I have a set of N data points X = {x_1, ..., x_n} and a set of N target values / classes Y = {y_1, ..., y_n}. The feature vector for a given y_i is constructed taking into account a "window" (for lack of a better term) of data points, e.g. I might want to stack "the last 4 data points", i.e. x_{i-4}, x_{i-3}, x_{i-2}, x_{i-1}, for prediction of y_i. Obviously for a window size of 4 such a feature vector cannot be constructed for the first three target values and I would like to simply drop them. Likewise for the last data point x_n. This would not be a problem, except I want this to take place as part of an sklearn pipeline. So far I have successfully written a few custom transformers for other tasks, but those cannot (as far as I know) change the Y matrix. Is there a way to do this that I am unaware of, or am I stuck doing this as preprocessing outside of the pipeline? (Which means I would not be able to use GridSearchCV to find the optimal window size and shift.) I have tried searching for this, but all I came up with was this question, which deals with removing samples from the X matrix. The accepted answer there makes me think what I want to do is not supported in scikit-learn, but I wanted to make sure.
scikit-learn custom transformer / pipeline that changes X and Y
That's not node.js related but redis. A request/response server can be implemented so that it is able to process new requests even if the client didn't already read the old responses. This way it is possible to send multiple commands to the server without waiting for the replies at all, and finally read the replies in a single step. This is called pipelining, and it is a technique that has been widely in use for many decades. For instance many POP3 protocol implementations already supported this feature, dramatically speeding up the process of downloading new emails from the server. Redis has supported pipelining since the very early days, so whatever version you are running, you can use pipelining with Redis. There is an example using the raw netcat utility at http://redis.io/topics/pipelining
node_redis states: "The performance of node_redis improves dramatically with pipelining, which happens automatically in most normal programs." I'm writing the program myself, so what's meant here? Does that mean simply non-blocking?
What's pipelining in node_redis?
According to the CircleCI docs, workflows (specifically) do not accept the condition key: "Note: When using logic statements at the workflow level, do not include the condition: key (the condition key is only needed for job level logic statements)." See the logic-statement-examples section of the docs (scroll to the bottom of that section to see the note).
I have followed the guide described in Conditional steps in jobs and conditional workflows and written the below code for my CircleCI pipeline.

version: 2.1
workflows:
  version: 2.1
  workflowone:
    when:
      condition: false
    jobs:
      - samplejob:
  workflowtwo:
    when:
      condition: true
    jobs:
      - jobone
jobs:
  samplejob:
    docker:
      - image: buildpack-deps:stable
    steps:
      - run:
          name: Sample Job in WF 1
          command: |
            echo "This job is in workflowone and the workflow should not run"
  jobone:
    docker:
      - image: buildpack-deps:stable
    steps:
      - run:
          name: Sample Job in WF 2
          command: |
            echo "This job is in workflowtwo and the workflow should run"

When I run the above code the output is not what I expected. The first workflow should not run because its condition is false, yet both workflows start running when the pipeline is triggered. Can anyone point out the missing piece here?
How do we conditionally run a CircleCI workflow?
Thanks to MrFlick's comment, the second link says the mscorefonts package is needed for text support in R. Adding mscorefonts to the conda environment fixes the issue:

# minimal.yaml
channels:
  - bioconda
  - conda-forge
  - defaults
dependencies:
  - r-base =3.6
  - r-ggplot2
  - mscorefonts
I'm building a pipeline with snakemake and using conda and singularity environments to ensure reproducibility. I run into an error where the text on my plots is replaced by rectangles. After experimenting with the pipeline on Linux and Mac systems and disabling the singularity container, it appears the issue stems from a missing font library, as the text is drawn normally when I run the pipeline using only conda (--use-conda) on my Mac. The singularity container is built from this miniconda docker image that uses Debian GNU/Linux. I've managed to create a minimal example pipeline where the text doesn't get drawn.

# Snakefile
singularity: "docker://continuumio/miniconda3"

rule all:
    input:
        "mtcars-plot.png"

rule plot_mtcars:
    output: "mtcars-plot.png"
    conda: "minimal.yaml"
    script: "mtcars-test.R"

# mtcars-test.R
library(ggplot2)
png("mtcars-plot.png")
ggplot(mtcars, aes(factor(cyl), mpg)) + geom_boxplot()
dev.off()

# minimal.yaml
channels:
  - bioconda
  - conda-forge
  - defaults
dependencies:
  - r-base =3.6
  - r-ggplot2

To draw the broken plot, run the pipeline with

snakemake --use-conda --use-singularity

What packages/libraries could I be missing to correctly draw text with R on Debian GNU/Linux?
R Draws Plots with Rectangles Instead of Text
After talking with a colleague (thanks Lucas Still), he had the idea, and pointed out in GitLab's documentation, to check variables for which user is pushing. This was a great idea since I already had a bot GitLab user that does the git push, so all I had to do was add an except and check if the user is that bot account:

dev build:
  stage: build
  except:
    variables:
      - $GITLAB_USER_LOGIN == "my-bot"
  only:
    - develop@username/reponame
  script:
    - npm version patch
    - echo "Do build here before pushing the version bump"
    - git push git@my-repo-url:$CI_PROJECT_PATH.git HEAD:$CI_COMMIT_REF_NAME --follow-tags

So the only thing that is important here is to change "my-bot" to the username of the bot account. You could use $GITLAB_USER_ID or even $GITLAB_USER_EMAIL instead, but the user name is more descriptive to other people that come across the yml file.

Thanks for the answer, I ran into the same problem recently. Though, it is not clear to me how you are able to make this git push from your bot GitLab user (I'm quite new to GitLab). As far as I can get with the snippet above, I cannot understand how you configure this. Could you please add more details? – EM90
I have a project where I have 4 environments (dev, test, staging and prod) and we have branches for each (develop, test, staging, master respectively). We use npm version to bump the version in package.json but also add a git tag. After that we run the build and, on success, we push the commit and tag created by the npm version command. So in my pipeline job, I have this (simplified):

dev build:
  stage: build
  only:
    - develop@username/reponame
  script:
    - npm version patch -m "[ci skip] %s"
    - git add -A
    - echo "Do build here before pushing the version bump"
    - git push git@my-repo-url:$CI_PROJECT_PATH.git HEAD:develop --follow-tags

Notice that with npm version I also specify a message for the git commit so I can add the "[ci skip]", which is how we stop the infinite loop, but then we have pipeline runs listed as skipped under the status column. Not the worst thing in the world, but I wanted to see if there is a better way to do this sort of thing: have a version git commit and tag pushed to the repo without triggering another pipeline run.
Prevent infinite loop with Gitlab pipeline and git pushing
One way to make this work for OSX & Linux is to use named pipes to signal to the background jobs that they can continue. Unfortunately it requires splitting up "Task *.1" & "Task *.2", but it's the best I can come up with for the moment.

#!/bin/bash

function cleanup {
    rm -f .task.a.1.done .task.b.1.done
}
trap cleanup EXIT
cleanup

mkfifo .task.a.1.done
mkfifo .task.b.1.done

{
    echo "Task A.1"
    echo -n "" > .task.a.1.done
} &

{
    echo "Task B.1"
    echo -n "" > .task.b.1.done
} &

{
    echo "Task X"
}

{
    cat < .task.a.1.done
    echo "Task A.2"
} &

{
    cat < .task.b.1.done
    echo "Task B.2"
} &

wait
I'm trying to model a concurrent build pipeline in a single Bash script. I know I can use other tools, but at this point I'm doing it to better understand Bash. Scheduling jobs in parallel is easy, and waiting for them all at the end is easy. But I want to make it run faster by triggering Task A.2 immediately after Task A.1 & Task X. To make it even harder on myself, the code in Task A.1 & Task A.2 is related & sequential, so it would be nice if I could keep that code sequential as well.

#!/usr/bin/bash

{
    echo "Task X"
} &
DEPENDENCY=$!

{
    echo "Task A.1"
    wait "${DEPENDENCY}"
    echo "Task A.2"
} &

{
    echo "Task B.1"
    wait "${DEPENDENCY}"
    echo "Task B.2"
} &

wait

This is ideally what I want, but it doesn't work because child processes can't wait for each other -- which makes sense -- but I'm hoping I can make this work in a cross-platform way. I actually have this working, but I wasn't able to keep the code for parts *.1 and *.2 together. Also it would be nice if this can work on OSX & Linux. I'm hoping a Bash expert can chime in and show a succinct way of expressing this in Bash.
Bash complex pipeline dependencies
As far as I remember (can't test now), you can read a whole file into memory with the namespace notation:

${c:file1.txt} = ${c:file1.txt} -replace "a", "o"

Very cool! Never seen this syntax. Note that for the specific example above, you need to use parentheses around the RHS of the assignment: ${c:foo} = ( ${c:foo} | format-xml ). Or you can use a modified pipeline: ${c:foo} | format-xml | out-file foo – Richard Berg

This is the same syntax used to access variables. There is a special interface on Providers to support this, but not all providers implement it. So that is really how you access variables in the variable provider. – JasonMArcher
I'd like to be able to type quick, simple commands that manipulate files in-place. For example:

# prettify an XML file
format-xml foo | out-file foo

This won't work because the pipeline is designed to be "greedy." The downstream cmdlet acquires a write lock to the file as soon as the upstream cmdlet processes the first line of input, which stalls the upstream cmdlet from reading the rest of the file. There are many possible workarounds: write to temporary files, separate operations into multiple pipelines (storing intermediate results in variables), or similar. But I figure this is a really common task for which someone has developed a quick, shell-friendly shortcut. I tried this:

function Buffer-Object {
    [CmdletBinding()]
    param (
        [parameter(Mandatory=$True, ValueFromPipeline=$True)]
        [psobject] $InputObject
    )
    begin { $buf = new-list psobject }
    process { $buf.Add($InputObject) }
    end { $buf }
}

format-xml foo | buffer-object | out-file foo

It works OK in some situations. Mapped to a short alias and rolled into a common distribution like PSCX, it would be "good enough" for quick interactive tasks. Unfortunately it appears that some cmdlets (including out-file) grab the lock in their Begin{} method rather than in Process{}, so it does not solve this particular example. Other ideas?
Powershell: how do you read & write I/O within one pipeline?
You can only delete individual pipelines one by one in the UI. To do bulk deletions, you can use the pipelines API to programmatically list and delete pipelines. In Python (with the python-gitlab library) it might look something like this:

import gitlab

project_id = 1234

gl = gitlab.Gitlab('https://gitlab.example.com', private_token='My token')
project = gl.projects.get(project_id)

for pipeline in project.pipelines.list(iterator=True):
    pipeline.delete()

I don't get why GitLab is not creating a cron schedule option to remove old pipelines. – FreshMike

How do you delete 1 pipeline in the history using the UI? – tzg

@tzg when you are viewing the pipeline (like where you can see the job graph), there is a delete button in the top-right corner, but IIRC the pipeline must be in a complete state (e.g., failed, skipped/cancelled, success) for the button to show up in the UI. Deleting pipelines requires owner role permissions. – sytech

Ok, permissions must be my issue because I'm not seeing the button. Thanks! – tzg
Is there an easy way to remove all the previous pipelines run in GitLab? I would like to clean up this section, but didn't find any options through the interface. Thanks a lot.
Clean up history in Gitlab Pipeline
From the GitLab UI you can not. Actually, from GitLab 15.3 (August 2022), you can:

Rebase a merge request from the UI without triggering a pipeline: "In large and busy monorepos with semi-linear branching, you might need to rebase your merge requests frequently. To save resources, you might not want to run a pipeline each time you rebase. You could skip the pipeline while rebasing with the API, or by using Git push options or [ci skip] in a commit message, but not when rebasing from the UI in a merge request. Now we have an option to skip the pipeline when rebasing from the UI, so you have better control over when a pipeline runs for your merge requests. Thanks to Kev for the contribution!" See Documentation and Issue.

To rebase a merge request's branch without triggering a CI/CD pipeline, select "Rebase without pipeline" from the merge request reports section. This option is available when a fast-forward merge is not possible but a conflict-free rebase is possible. Rebasing without a CI/CD pipeline saves resources in projects with a semi-linear workflow that requires frequent rebases.
While reviewing a merge request on gitlab.com, there are times when I have to rebase before completing the merge. After pressing "Rebase" on GitLab, I have a specific pipeline step that fails because it can't verify the user's gpg signature. How can I skip (or allow) this step when I rebase changes online? Is there a GitLab user id for this online process?
GitLab: How to skip or allow pipeline step after rebasing online
Why not use a border for each li except the last instead? E.g. (Demo Fiddle):

ul li:not(:last-child) {
    border-right: 1px solid grey;
    margin-right: 20px;
    padding-right: 20px;
}

Otherwise, you will likely need to add positioning to your :after pseudo-elements or change the display to inline-block - though it's hard to say without being able to replicate your issue with the provided code.
This is my first time posting a question here. I am working with Twitter Bootstrap on a website design. I have to create the navigation menu such that the items listed there are separated with a pipe character. I found another similar question here - Add a pipe separator after items in an unordered list unless that item is the last on a line - and I used that, but the items are not separated as they are supposed to be. What I need to get: http://prntscr.com/4br2lb. After implementing the code from the post I found here, I get this: http://prntscr.com/4br2yj. Here is my code:

HTML:

<div class="navbar navbar-default navbar-static-top header-menu">
    <div class="collapse navbar-collapse navHeaderMenuCollapse">
        <ul class="nav navbar-nav navbar-middle navbar-text" style="float:none;margin: 0 auto; display: table;table-layout: fixed;">
            <li><a href="#">HOME</a></li>
            <li><a href="#">AUTO</a></li>
            <li><a href="#">LIFE</a></li>
            <li><a href="#">HEALTH</a></li>
            <li><a href="#">BUSINESS</a></li>
            <li><a href="#">MORE...</a></li>
        </ul>
    </div>
</div>

CSS:

ul li {
    float: left;
}

ul li:after {
    content: "|";
    padding: 0 .5em;
}

Thank you in advance!
How to separate link items with pipeline
I tested with this regex and it works:

only:
  - /^milestone-.*$/

In your comment, you wrote /^mileston-.*$/ instead of /^milestone-.*$/ (the e is missing at the end of milestone).
I want to trigger a pipeline every time the current milestone branch changes. It works fine with a hardcoded milestone number; the problem is that we increase the milestone number every 2 weeks, and the GitLab runner doesn't parse .gitlab-ci.yml wildcards, so things like this do not work:

job:
  only:
    - milestone-*

I also tried a regex, as suggested by Makoto Emura here in the comments:

java:
  only:
    - /^mileston-.*$/

For now I use it this way and update my .gitlab-ci.yml after creating a new milestone:

job:
  only:
    - milestone-10

I tried to look for an environment variable for the target branch but didn't find any. Does anyone know a solution?
GitlabCI run pipeline on specific branch using wild card
You could do this (set up a curses application in a pipeline) by using the newterm function to directly open the terminal for managing the screen, while reserving stdout for the pipeline. The dialog program does this. But the Python curses interface does not have newterm (it only has initscr, which uses stdout for the screen...), and while there are probably workarounds (in Python, juggling the I/O streams), it hasn't been addressed in any of the answers on this forum.

Hey Thomas, thank you for the answer. I think we're on the right track. I believe such juggling you mention is implemented in percol. The relevant code is here. The standard descriptors are taken from the os module and then there is some juggling with os.dup() and os.dup2(). But I don't quite grasp the whole thing. Could you be so kind as to update your answer with a small example for stdout only? – user3160153
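For what it's worth, here is a minimal sketch (an assumption, not part of the original answer) of the file-descriptor juggling mentioned in that comment, for stdout only. It assumes a controlling terminal is available at /dev/tty and uses initscr via curses.wrapper rather than newterm:

import curses
import os
import sys

def run_ui(stdscr):
    # hypothetical UI: draw something, wait for a key, return a result
    stdscr.addstr(0, 0, "press any key to continue")
    stdscr.getkey()
    return "selected line"

pipe_out = os.dup(1)                       # keep the pipeline's stdout
tty = os.open("/dev/tty", os.O_WRONLY)
os.dup2(tty, 1)                            # curses now draws on the terminal
# note: if stdin is also a pipe, fd 0 needs the same treatment with
# /dev/tty opened for reading, or curses won't receive keystrokes

result = curses.wrapper(run_ui)

os.dup2(pipe_out, 1)                       # restore the pipeline's stdout
sys.stdout.write(result + "\n")            # only now emit output downstream
sys.stdout.flush()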
I am writing a Python application that is intended to be used interactively inside Unix pipelines. The application should fire up a curses-based terminal UI and, based on user interaction, write to standard output only right before exiting. Typical usage would be that of a typical pipeline:

foo_command | my_application | sink_app

The problem I am having is that the Python curses library sends all sorts of things to stdout while the app is running. Furthermore, sink_app starts executing while my_application is running. How do I prevent curses from polluting stdout? How do I buffer the output and control when I want to flush it? Is it possible to control when sink_app starts executing and when it stops accepting input? From what I gather, I need to save a reference to the stdout file descriptor so I can later write to it, and pass another fd (which one?) to ncurses - supposedly through newterm(), but this isn't available in the Python curses binding.
How do I make a Python curses application pipeline friendly?
MJPEG is a codec, which in simple terms means that there is a series of JPEG images. These JPEGs have to be stored in a container if you want to view them as a video. MP4 is a common container to store them in. So you can mux the jpegenc output back through an mp4mux and store it in a file. Any decent media player should be able to play it back.

It looks like mp4mux doesn't allow recording MJPEG in an MP4 file container. [stackoverflow.com/questions/46276014/… – Ahresse

Yes, it seems AVI does support it though – Climax
I have a problem with saving an MJPEG stream to a file. When I stream MJPEG using such a pipeline:

gst-launch filesrc location=thirdmovie640x360.mp4 ! decodebin2 name=dec \
  ! queue ! ffmpegcolorspace ! jpegenc ! queue ! avimux name=mux \
  ! udpsink host=192.168.0.2 port=5000

I am able to play this stream on my second machine using such a pipeline:

gst-launch -v udpsrc port=5000 ! jpegdec ! autovideosink

However, how can I save such an MJPEG stream to a file (without transcoding!) that can be played in some media player? Could you recommend some pipeline? I found this pipeline to save the output stream as a Matroska file:

gst-launch udpsrc port=5000 ! multipartdemux ! jpegparse ! jpegdec \
  ! ffmpegcolorspace ! matroskamux ! filesink location=output.mkv

How do I change it to save an MP4 file? This pipeline:

gst-launch udpsrc port=5000 ! multipartdemux ! jpegparse ! jpegdec \
  ! ffmpegcolorspace ! mp4mux ! filesink location=output.mp4

does not work. Could you help me save it in an MP4 container (or AVI container) without transcoding the MJPEG video?
GStreamer - MJPEG stream to file
The tee command is similar; it writes its input to standard output as well as to one or more files. If that file is a process substitution, you get the same effect, I believe.

echo 1 2 3 | tr ' ' '\n' | sort | tee >( code ) | uniq

The code in the process substitution would read from its standard input, which should be the same thing that the call to uniq ends up seeing.

And in case you just want to display to stderr, ... | tee /dev/stderr | ... – Vic
Is there an idiomatic analog to Ruby's Object#tap for Unix command pipelines? Use case: within a pipeline I want to execute a command for its side effects but return the input implicitly so as to not break the chaining of the pipeline. For example:

echo { 1, 2, 3 } |
tr ' ' '\n' |
sort |
tap 'xargs echo' | # arbitrary code, but implicitly return the input
uniq

I'm coming from Ruby, where I would do this:

[ 1, 2, 3 ].
  sort.
  tap { |x| puts x }.
  uniq
Idiomatic Analog to Ruby's `Object#tap` for Unix command Pipelines?
I don't think it is possible to pipe into another window. Your START command within your parent window is receiving the pipe input. The START command ignores the input. You want the MORE command in the new window to receive the input. I believe the only way to achieve your goal is to use a temporary file:

@echo off
set "file=%temp%\%~nx0.temp"
findstr "^" >"%file%"
start "" cmd /c more +5 ^<"%file%" ^& del "%file%" ^& pause

Why do you use ^< before "%file%"? – DevPlayer

He's using findstr as a cat command, which Windows lacks. findstr applies a regular expression to its input lines and writes those lines that match to its output. ^ matches the beginning of a line, so all lines get copied, like cat. – George
See this script saved in a file called foo.cmd.

@echo off
more +5

This script may now be used in this manner.

dir C:\Windows | foo

It displays the output beginning from the 6th line, one screen at a time (i.e. as a pager). The current command prompt remains blocked until I quit more. Now I modify the script as follows, so that the more output is displayed in a separate window.

@echo off
start "" more +5

Now if I run the following command, a new window is launched fine, but no output is displayed in it.

dir C:\Windows | foo

It appears that the output of the dir command that I have piped into foo.cmd is not being received by the start command. What can I do to ensure that any data piped into the standard input of the start command is passed on to the program being invoked by start (which is more in this case)?
How to make Windows START command accept standard input via pipe and pass it on to the command it invokes?
This gives me my desired end result:

$( Command-GetA; Command-GetB ) | Group Result | Command-GetC

Thus, if "GetC" is a concatenating cmdlet, the entire "GetC" cmdlet code runs only once for all my input files (my output file is created anew every time I run "concat"). Though all the above answers here are correct, my requirement was a bit different :)

EDIT: This is perfect!

,@(Command-GetA;Command-GetB)|Command-GetC

Thanks @PetSerAl

So, your actual requirement was a bit different: that output should be passed as a single item, not by elements. But if you want to pass an array as a single item, there is no other possible way than to first create that array and store the outputs in it (the phrase in your question, "Other than ways like storing the outputs in an array and then passing it to pipeline?", was really misleading about your intentions). If I am right about your real requirement, then this is what you needed: ,@(Command-GetA;Command-GetB)|Command-GetC – user4003407

Yes, you are very correct! By that line I meant that I didn't want a separate array to be declared in one line of cmd, then use the array to pass to GetC, which would become another line of cmd. I basically wanted to get it done in one line. Sorry, I should have phrased my question properly. Thanks a lot! – Seeker
I have a requirement to combine the outputs of multiple PowerShell scripts to be given as pipeline output, using cmdlets in .NET. For example:

Command-GetA -combine Command-GetB | Command-GetD

So I want to send the outputs of GetA and GetB to GetD via the pipeline. Is there a compact way to do this using PowerShell scripts or .NET cmdlets, other than ways like storing the outputs in an array and then passing it to the pipeline? Another, more complex example could be:

Command-GetA -combine ( Command-GetB | Command-GetD ) | Command-GetF

This should combine GetA and GetB | GetD to send as pipeline input to GetF.

EDIT: It would be good if I could do something like this:

@( GetA ; GetB ) -ConvertToASingleList | GetC

So OutputOfGetA and OutputOfGetB shouldn't get called separately on GetC. Both should, as a combined array or list object, be passed to GetC.
combine multiple powershell cmdlet outputs
You need to create your own class that inherits from sklearn's BaseEstimator and TransformerMixin, and then put your function's logic in the fit / transform / fit_transform (or predict / predict_proba, etc.) methods of that class. See: Put customized functions in Sklearn pipeline
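A minimal sketch of that idea for the two steps described in the question; the cleaning logic is a placeholder assumption and the data is assumed to be a pandas DataFrame:

from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline

class Preprocessor(BaseEstimator, TransformerMixin):
    """Wraps a cleaning step so it can sit inside a Pipeline."""

    def fit(self, X, y=None):
        # nothing to learn for a stateless cleaning step
        return self

    def transform(self, X):
        # placeholder cleaning: fill missing values (avoid dropping rows,
        # which would leave X and y with different lengths)
        return X.fillna(0)

pipe = Pipeline([
    ("preprocess", Preprocessor()),
    ("model", RandomForestClassifier()),
])

# pipe.fit(train_df, train_labels)
# probs = pipe.predict_proba(new_test_data)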
How do I create an sklearn pipeline with custom functions? I have two functions, one for cleaning data and a second for building the model.

def preprocess(df):
    ……………….
    # clean data
    return df_clean

def model(df_clean):
    …………………
    # split data into train and test and build a randomForest model
    return model

So I used FunctionTransformer and created a pipeline:

from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.preprocessing import FunctionTransformer

pipe = Pipeline([("preprocess", FunctionTransformer(preprocess)),
                 ("model", FunctionTransformer(model))])
pred = pipe.predict_proba(new_test_data)
print(pred)

I know the above is wrong; I'm not sure how to proceed. In the pipe I need to pass the training data first, and then pass new_test_data?
Creating pipeline in sklearn with custom functions?
FunctionTransformer is used to "lift" a function to a transformation, which I think can help with some data-cleaning steps. Imagine you have a mostly numeric array and you want to transform it with a Transformer that will error out if it gets a nan (like Normalizer). You might end up with something like

df.fillna(0, inplace=True)
...
cross_val_score(pipeline, ...)

but maybe that fillna is only required in one transformation, so instead of having the fillna like above, you have

normalize = make_pipeline(
    FunctionTransformer(np.nan_to_num, validate=False),
    Normalizer()
)

which ends up normalizing it as you want. Then you can use that snippet in more places without littering your code with .fillna(0).

In your example, you're passing in ['numeric1'], which is a list and not an extractor like the similarly typed df[['numeric1']]. What you may want instead is more like FunctionTransformer(operator.itemgetter(columns)), but that still won't work because the object that is ultimately passed into the FunctionTransformer will be an np.array and not a DataFrame. In order to do operations on particular columns of a DataFrame, you may want to use a library like sklearn-pandas, which allows you to define particular transformers by column.
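If you do want to stay with a plain FunctionTransformer, here is a sketch (not from the original answer) of a selector that keeps working as long as a pandas DataFrame actually reaches it, i.e. with validate=False so the input is not coerced to an array first; the column name and the scaler are just illustrative:

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler

def select_columns(X, columns=None):
    # assumes X is still a pandas DataFrame at this point in the pipeline
    return X[columns]

numeric_branch = Pipeline([
    ("select", FunctionTransformer(select_columns, validate=False,
                                   kw_args={"columns": ["numeric1"]})),
    ("scale", StandardScaler()),
])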
Writing my first pipeline for scikit-learn, I stumbled upon some issues when only a subset of columns is put into a pipeline:

mydf = pd.DataFrame({'classLabel':[0,0,0,1,1,0,0,0],
                     'categorical':[7,8,9,5,7,5,6,4],
                     'numeric1':[7,8,9,5,7,5,6,4],
                     'numeric2':[7,8,9,5,7,5,6,"N.A"]})

columnsNumber = ['numeric1']
XoneColumn = X[columnsNumber]

I use the FunctionTransformer like:

def extractSpecificColumn(X, columns):
    return X[columns]

pipeline = Pipeline([
    ('features', FeatureUnion([
        ('continuous', Pipeline([
            ('numeric', FunctionTransformer(columnsNumber)),
            ('scale', StandardScaler())
        ]))
    ], n_jobs=1)),
    ('estimator', RandomForestClassifier(n_estimators=50, criterion='entropy', n_jobs=-1))
])

cv.cross_val_score(pipeline, XoneColumn, y, cv=folds, scoring=kappaScore)

This results in TypeError: 'list' object is not callable when the function transformer is enabled.

Edit: If I instantiate a ColumnExtractor like below, no error is returned. But isn't the FunctionTransformer meant just for simple cases like this one, and shouldn't it just work?

class ColumnExtractor(TransformerMixin):
    def __init__(self, columns):
        self.columns = columns

    def transform(self, X, *_):
        return X[self.columns]

    def fit(self, *_):
        return self
sklearn function transformer in pipeline
To pickle the TfidfVectorizer, you could use:

joblib.dump(pipeline.steps[0][1].transformer_list[0][1], dump_path)

or:

joblib.dump(pipeline.get_params()['features__tfidf'], dump_path)

To load the dumped object, you can use:

pipeline.steps[0][1].transformer_list[0][1] = joblib.load(dump_path)

Unfortunately you can't use set_params, the inverse of get_params, to insert the estimator by name. You will be able to if the changes in PR #1769 (enable setting pipeline components as parameters) are ever merged!
I am using Pipeline from sklearn to classify text. In this example Pipeline, I have a TfidfVectorizer and some custom features wrapped with FeatureUnion, and a classifier as the Pipeline steps. I then fit the training data and do the prediction:

from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

X = ['I am a sentence', 'an example']
Y = [1, 2]
X_dev = ['another sentence']

# classifier
LinearSVC1 = LinearSVC(tol=1e-4, C = 0.10000000000000001)

pipeline = Pipeline([
    ('features', FeatureUnion([
        ('tfidf', TfidfVectorizer(ngram_range=(1, 3), max_features= 4000)),
        ('custom_features', CustomFeatures())])),
    ('clf', LinearSVC1),
])

pipeline.fit(X, Y)
y_pred = pipeline.predict(X_dev)
# etc.

Here I need to pickle the TfidfVectorizer step and leave the custom_features unpickled, since I still do experiments with them. The idea is to make the pipeline faster by pickling the tfidf step. I know I can pickle the whole Pipeline with joblib.dump, but how do I pickle individual steps?
How to pickle individual steps in sklearn's Pipeline?
In the event that your pipeline cannot be found for the selected domain, please go through and verify all of the following:

Double check pipeline-node naming. Pipeline URLs are generated from their name and your desired entry node. In this scenario, I would expect a file named Hello.xml in your cartridge's pipeline directory, with a start node named Start, to be accessed via {instanceURL}/on/demandware.store/Sites-mySite-Site/Hello-Start

Try to force an upload of your cartridges. Occasionally the files on the server will not be updated correctly when a save is made; to force an update, right click your project, click Demandware > Upload Cartridges.

Check your cartridge path. If you are using a shared instance, or your instance is re-provisioned, you may need to check your cartridge path to be sure your custom cartridge(s) are still there.

Check your code versions. Occasionally you may increment / change your code version - if you do, make sure that the path you select in Studio is the one that you have selected in Business Manager.

Tech support: should you still have issues after following the four steps above, please file a support ticket and the tech-support team will be able to provide you with more assistance.
I already made a pipeline, which was working fine. Suddenly it gives an error like:

2015-12-18 02:39:08.091 GMT] ERROR system.core ISH-CORE-2368 Sites-SiteGenesis-Site core Storefront [uuid] [request-id]-0-00 [timestamp] "Error executing pipeline: Hello com.demandware.beehive.core.capi.pipeline.PipelineExecutionException: Pipeline not found (Hello) for current domain (Sites-SiteGenesis-Site)"

Does anybody know how to solve this?
Demandware - Pipeline not found for current domain
from sklearn.utils import estimator_html_repr

with open("pipeline.html", "w") as f:
    f.write(estimator_html_repr(pipeline))

This does not answer the question, since the OP clearly asks how to create an image without having to take a screenshot. I might add: could it even be saved as a vector graphic? – Jeyes Unterwegs
I want to save a pipeline displayed with the set_config(display="diagram") command, see below. Is there a better way to do it than by taking a screenshot?
Save sklearn pipeline diagram
function trigger_job() {
    job_name=$1
    curl --user ${CIRCLE_API_TOKEN}: \
        --data build_parameters[CIRCLE_JOB]=$job_name \
        --data revision=$CIRCLE_SHA1 \
        https://circleci.com/api/v1.1/project/github/$CIRCLE_PROJECT_USERNAME/$CIRCLE_PROJECT_REPONAME/tree/$CIRCLE_BRANCH
}

I use this function to trigger a job, and I detect what changed with git diff, like this:

git diff-tree --name-only $(git log -n 2 --pretty=format:"%H") | grep project
I'm using CircleCI version 2 and my config.yml looks like this:

version: 2
jobs:
  a:
    steps: ...
  b:
    steps: ...

workflows:
  version: 2
  main_pipeline:
    jobs:
      - a
      - b

I want to build only when a change happens in the corresponding directory: job a for folder a, job b for folder b. When folder a changes, build only job a.
circleCI - How to run a job when there is a change in a specific directory
Using a ThreadLocal seems like a good idea if that is enough for you. Be aware that this will only work well for upstream events, as downstream events may be fired by any thread, so the ThreadLocal approach may not work out that well there.

Hmm, good point. AFAIK, for now this is ok, as my server is open/request/response/close, so the write thread will always be a worker thread. Writes would happen either immediately, in the worker thread that read the response, or if the write didn't succeed then later using OP_WRITE, also in a worker thread. If my requests start to take too long and I introduce a thread pool to avoid blocking the worker threads, I'd have to change ThreadLocal to a synchronized pool. Does that all sound correct to you? – NateS
On the server, ChannelPipelineFactory#getPipeline() is called for every new connection. My server's ChannelHandlers have allocations for serialization buffers and other objects that are not thread safe. How can I avoid allocating them for each connection? My application has a high volume of short-lived connections and this is hurting performance. I don't store per-connection state in the handlers; I really only need to allocate them once per thread that will use the pipeline, because the objects in the handlers are not thread safe. Possibly I could use a pipeline with a single handler. When any event is received, I would obtain a reference to my actual handler from a ThreadLocal. This way I only allocate an actual handler once per thread serving connections. It does mean a ThreadLocal lookup for every event though. Are there other solutions that might be better? I have the impression that the Netty pipeline looks great in relatively simple example code and is quite neat when it fits the problem well (such as the HTTP handling), but that it isn't very flexible for many other scenarios. Here are someone else's thoughts along those lines. It isn't any sort of disaster, since using a single handler to roll your own "pipeline" seems perfectly viable, but I wonder if I'm doing it wrong?
using the Netty pipeline efficiently
In general, no. If you look at the interface for sklearn stages, the methods are of the form:

fit(X, y, other_stuff)
predict(X)

That is, they work on the entire dataset, and can't do incremental learning on streams (or chunked streams) of data. Moreover, fundamentally, some of the algorithms are not amenable to this. Consider for example your stage

("SCALE", Normalizer()),

Presumably, this normalizes using mean and/or variance. Without seeing the entire dataset, how can it know these things? It must therefore wait for the entire input before operating, and hence can't be run in parallel with the stages after it. Most (if not nearly all) stages are like that.

However, in some cases, you still can use multiple cores with sklearn.

Some stages have an n_jobs parameter. Stages like this run sequentially relative to other stages, but can parallelize the work within themselves.

In some cases you can roll your own (approximate) parallel versions of other stages. E.g., given any regressor stage, you can wrap it in a stage that randomly chunks your data into n parts, learns the parts in parallel, and outputs a regressor that is the average of all the regressors. YMMV.
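A rough sketch (my own, not from the answer) of that last idea, chunking the data and averaging independently fitted regressors with joblib; the base estimator, chunk count, and numpy-array inputs are all assumptions:

import numpy as np
from joblib import Parallel, delayed
from sklearn.base import BaseEstimator, RegressorMixin, clone
from sklearn.linear_model import Ridge

def _fit_clone(estimator, X, y):
    # fit an independent clone on one chunk of the data
    return clone(estimator).fit(X, y)

class AveragedRegressor(BaseEstimator, RegressorMixin):
    """Fits one clone of `base` per random chunk, in parallel, and
    predicts with the mean of the clones' predictions."""

    def __init__(self, base, n_chunks=4, n_jobs=-1):
        self.base = base
        self.n_chunks = n_chunks
        self.n_jobs = n_jobs

    def fit(self, X, y):
        chunks = np.array_split(np.random.permutation(len(X)), self.n_chunks)
        self.models_ = Parallel(n_jobs=self.n_jobs)(
            delayed(_fit_clone)(self.base, X[idx], y[idx]) for idx in chunks)
        return self

    def predict(self, X):
        return np.mean([m.predict(X) for m in self.models_], axis=0)

# usage: AveragedRegressor(Ridge(), n_chunks=4).fit(X, y).predict(X_new)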
I have a set of Pipelines and want to have a multi-threaded architecture. My typical Pipeline is shown below:

huber_pipe = Pipeline([
    ("DATA_CLEANER", DataCleaner()),
    ("DATA_ENCODING", Encoder(encoder_name='code')),
    ("SCALE", Normalizer()),
    ("FEATURE_SELECTION", huber_feature_selector),
    ("MODELLING", huber_model)
])

Is it possible to run the steps of the pipeline in different threads or cores?
Parallelization of sklearn Pipeline
In real code, not all instruction results will be written to the register file; instead they will pass through forwarding paths. If you mix dependent and independent instructions in your code, you may see higher IPC. The A57 optimisation guide states that late-forwarding occurs for chains of multiply-accumulate instructions, so maybe something like this will dual-issue:

.loop
    vmla.s16 q0,q0,q1
    vmla.s16 q0,q0,q2
    vmla.s16 q0,q0,q3
    vmla.s16 q4,q4,q1
    vmla.s16 q4,q4,q2
    vmla.s16 q4,q4,q3
    ...etc
The Cortex-A57 Optimization Guide states that most integer instructions operating on 128-bit vector data can be dual-issued (page 24, integer basic F0/F1, logical F0/F1, execution throughput 2). However, with our internal (synthetic) benchmarks, throughput seems to be limited to exactly 1 128-bit NEON integer instruction per cycle, even when there is plenty of instruction parallelism available (the benchmark was written with the intention to test whether 128-bit NEON instructions can be dual-issued, so this is something we took care of). When mixing 50% 128-bit with 50% 64-bit instructions, we were able to achieve 1.25 instructions per clock (only NEON integer arithmetic, no loads/stores). Are there special measures which have to be taken in order to get dual-issue throughput when using 128-bit ASIMD/NEON instructions? Thanks, Clemens
Can Cortex-A57 dual-issue 128-bit neon instructions?
"What is the protocol to fetch a URL outside of the crawling process?": When you create a Request, giving it a URL, it doesn't matter where you've taken the URL from; you can extract it from the page, or build it some other way.

"How do I build items from several sources in an elegant way?": Use Request.meta

One last question: assuming I have multiple spiders, is it possible that all items be fed into the same pipeline, and then use the technique described in the manual to merge "duplicates"? – Jérémie

I don't understand - need more info. I suggest creating a new question, if it's not related to this one. – warvariuc
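A minimal sketch of the Request.meta pattern (the spider name, URL construction, and selectors are all placeholders, not from the question):

import scrapy

class ListingSpider(scrapy.Spider):
    name = "listing"
    start_urls = ["https://example.com/listing"]          # placeholder

    def parse(self, response):
        for row in response.css("div.item"):               # placeholder selector
            item = {"title": row.css("a::text").get()}
            # the complementary URL is built from scraped data, not followed from the page
            detail_url = "https://example.com/api/%s" % item["title"]
            yield scrapy.Request(detail_url, callback=self.parse_detail,
                                 meta={"item": item})      # carry the partial item along

    def parse_detail(self, response):
        item = response.meta["item"]
        item["extra"] = response.css("span.extra::text").get()   # placeholder
        yield item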
After years of reluctantly coding scrapers as a mish-mash of regexp and BeautifulSoup, etc., I found Scrapy, which I pretty much count as this year's Christmas present to myself! It is natural to use, and it seems to have been built to make practically everything elegant and reusable. But I am in a situation I am not sure how to tackle: my spider crawls and scrapes a listing page A, from which I generate a set of items. But for each item, I need to fetch a distinct complementary link (constructed from some of the scraped information, but not explicitly a link on the page which Scrapy could follow) to obtain additional information. My question is in two parts: what is the protocol to fetch a URL outside of the crawling process? How do I build items from several sources in an elegant way? This has partially been asked (and answered) in a previous question on Stack Overflow. But I am more interested in what the philosophy of Scrapy is supposed to be in this usage case - surely not an unforeseen possibility? I wonder if this is one of the things the Pipelines are destined to be used for (adding information from a secondary source deduced from the primary information is an instance of "post-processing"), but what is the best way to do it, so as to not completely mess up Scrapy's efficient asynchronous organization?
Scrapy: how to build an item gathering information from several URLs?
I did it by exporting an "error" or "success" file, which I can then use to decide what to do. For example, you can use after_script:

build:
  stage: build
  script:
    - COMMAND 2>&1 || { echo "Some errors appears, will continue" ; touch $CI_PROJECT_DIR/error ; }
  after_script:
    - if [ -e error ]; then echo "Seems all was done But some issues appears" ; <DO SOMETHING> fi ;

You can also pass this file as an artifact to other jobs and use trigger.include to dynamically create jobs if needed, e.g. as described in the GitLab documentation.
I use a GitLab CI pipeline like this:

Build -> Deploy -> Test
  ---> Rollback if Build fails
  |--> Rollback if Deploy or Test fails

I implemented "Rollback if Build fails" in this way and it works perfectly: it runs only if Build fails, and is skipped if Deploy or Test fail.

rollback-on-build:
  when: on_failure
  needs: [ "Build" ]

And I tried "Rollback if Deploy or Test fails" like this:

rollback-on-finish:
  when: on_failure
  needs: [ "Deploy", "Test" ]

But it works only if Test fails! It is skipped if Deploy fails! How do I create a job that runs if any one of a list of jobs fails?
Gitlab CI: run job if any of specific jobs failed
There are a few options depending on your use case:

Create a waiting task: in this case, you can write a root task for your flow that waits for the external dependency / condition to be met, and then returns. As long as the other tasks depend on this one, they won't run until this task completes.

Use the GraphQL API: both Prefect Server and Cloud have a fully featured GraphQL API for performing many common actions with flows and runs. In this case, you can call create_flow_run whenever your external condition is met (possibly with Parameter values describing the condition) to create an ad-hoc run of your flow. For more discussion of this pattern, check out this Stack Overflow question.
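A minimal sketch of the first option (a waiting root task), assuming Prefect 0.x / Prefect Core, with the file path and polling interval as placeholders:

import time
from pathlib import Path

import prefect
from prefect import Flow, task

@task
def wait_for_file(path: str, poll_seconds: int = 30) -> str:
    # root task: block until the external file shows up
    logger = prefect.context.get("logger")
    while not Path(path).exists():
        logger.info("Waiting for %s ...", path)
        time.sleep(poll_seconds)
    return path

@task
def process(path: str):
    prefect.context.get("logger").info("Processing %s", path)

with Flow("file-triggered-flow") as flow:
    ready = wait_for_file("/data/incoming/ready.csv")    # placeholder path
    process(ready)

# flow.run()
# For the second option, an external process could instead call the API once the
# file exists, e.g. prefect.Client().create_flow_run(flow_id="...") in Prefect 0.x.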
I have a prefect flow that I want to run if and when a specific file appears. With something like Luigi you would create an ExternalTask that outputs that file and then impose a dependence on it. What is the standard pattern for this in Prefect?
Prefect how to wait for external dependency
There is no such for-loop expression in Azure YAML pipelines. And looping through dynamic template parameters in Azure DevOps YAML by specifying n at runtime is also not available. Within a template expression, you have access to the parameters context that contains the values of parameters passed in. Additionally, you have access to the variables context that contains all the variables specified in the YAML file plus the system variables. Importantly, it doesn't have runtime variables such as those stored on the pipeline or given when you start a run. Template expansion happens very early in the run, so those variables aren't available. https://learn.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops

You could use a matrix for this. Also, not sure why you think a for-each loop won't work here; it should. – 4c74356b41
I am aware of how to use a for-each loop and did a POC, and it is working fine. But I have a request where I need to use a for loop instead of for-each.

Business case: I need to create tasks dynamically based on user input.

Reason: some teams use multiple Maven tasks in their projects. I have a centralised template which will create tasks based on user input for the number of Maven tasks they need in their pipeline.

Example:

${{ for i=1; i <= n; i++ }}
  - task: maven@5
    pompath: ${pomxmlpath}

When n = 5 it has to create 5 Maven tasks in the Azure pipeline.
Need to use For loop not for each loop in Azure Pipeline Template yaml
They've put up a blog post explaining how to do this: https://about.gitlab.com/blog/2020/05/07/how-gitlab-automates-releases/

They've created a tool (gitlab-releaser) to help with this task. Basically you create a new step where you use a docker image that provides this tool, and then call the tool with the proper parameters.

release_upload:
  image: registry.gitlab.com/gitlab-org/release-cli:v0.1.0
  script:
    - gitlab-releaser create --name="My Release" --description="My Release description"
With the release of GitLab 11.7 in January 2019, we get the new key feature "Publish releases for your projects". I want precisely what the screenshot on that page shows, and I want to be able to download compiled binaries using the Releases API.

I can do it manually. Of course, instructions for the manual approach can be found here on Stack Overflow. The problem I need help with is doing it as part of a CI/CD pipeline, which is not covered by the answers one can find easily. The release notes contain a link to the documentation, which states: "we recommend doing this as one of the last steps in your CI/CD release pipeline."

From this I gather it's possible. However, the only approach I can imagine is using the GitLab API just as I do when I create releases manually. When one wants to access the GitLab API one has essentially three options for authentication, according to the fine manual: OAUTH2 tokens, personal access tokens and session cookies. Consequently I would need a method for having either of these available in my CI/CD pipeline, with sufficient privileges. Solutions for this problem are an ongoing discussion with lots of contributions, but virtually no tangible progress in recent years.

So, how does one create releases as one of the last steps in one's CI/CD release pipeline? Storing my personal access key with API access in a CI/CD variable or even a file in the repo is not an option for obvious reasons.
Create releases from within a GitLab runner/pipeline
I do not know the exact reason for the slowdown in run(), but when I use the code above and insert a little sleep (500 ms) at the end of the loop in main(), then the slowdown of run() is gone. So the system seems to need some "recovery time" until it is able to create new threads.
I have created an application which uses the pipeline pattern to do some processing. However, I noticed that when the pipeline is run multiple times in a row it tends to get slower and slower. This is also the case when no actual processing is done in the pipeline stages - so I am curious whether my pipeline implementation has a problem. This is a simple test program which reproduces the effect:

#include <iostream>
#include <boost/thread.hpp>

class Pipeline {
    void processStage(int i) {
        return;
    }

public:
    void run() {
        boost::thread_group threads;
        for (int i=0; i< 8; ++i) {
            threads.add_thread(new boost::thread(&Pipeline::processStage, this, i));
        }
        threads.join_all();
    }
};

int main() {
    Pipeline pipeline;
    int n=2000;
    for (int i=0;i<n; ++i) {
        pipeline.run();
        if (((i+1)*100)/n > (i*100)/n)
            std::cout << "\r" << ((i+1)*100)/n << " %";
    }
}

In my understanding the threads are created in run() and at the end of run() they are terminated. So the state of the program at the beginning of the outer loop in the main program should always be the same... But what I observe is an increasing slowdown when processing this loop. I know that it would be more efficient to keep the pipeline threads alive throughout the whole program - but I need to know if there is a problem with my pipeline implementation. Thanks! Constantin
Problem with pipeline implementation
sshpass is very rarely used in gitlab-ci.yml files. Much more frequent is the use of an ssh-agent (if your private key is passphrase-protected), as seen here:

before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - 'which rsync || ( apt-get update -y && apt-get install rsync -y )'
  - eval $(ssh-agent -s)
  - ssh-add <(echo "$SSH_PRIVATE_KEY_DEV")
  - ssh-add <(echo "$SSH_PRIVATE_KEY_PROD")
  - mkdir -p ~/.ssh
  - '[[ -f /.dockerenv ]] && echo "$SSH_HOSTKEYS" > ~/.ssh/known_hosts'

dev:
  script:
    - rsync --exclude=.gitlab-ci.yml --exclude=.git -avx -e ssh `pwd`/ [email protected]:/var/www/html/it_dev/it_monitor_app/auth/

Check if this approach is more reliable, using a masked variable for your private key.
I used the pipeline in another project and there it worked. But now I have a problem (I'm using exactly the same settings):

staging_upload:
  stage: staging
  only:
    refs:
      - develop
      - schedules
  script:
    - sshpass -e rsync -avz --progress --exclude='.git' --exclude='.gitlab-ci.yml' . $SSH_USERNAME@j$HOST:/home/xy/html/project/staging/

Now I get this error:

rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(644) [sender=3.1.2]

Does anyone have a clue what is going wrong here?
Why does my gitlab pipeline fail with rsync error
As per a training I attended - this should be the best practice :cv = CrossValidator(estimator=lr,..) pipelineModel = Pipeline(stages=[idx,assembler,cv]) cv_model= pipelineModel.fit(train)This way your pipeline would fit only once and not with each recurring run with the param_grid which makes it run faster. Hope this helps!ShareFollowansweredJun 29, 2020 at 2:37CuriousKKCuriousKK3511 silver badge77 bronze badgesAdd a comment|
Cross Validation outside from pipeline.val naivebayes val indexer val pipeLine = new Pipeline().setStages(Array(indexer, naiveBayes)) val paramGrid = new ParamGridBuilder() .addGrid(naiveBayes.smoothing, Array(1.0, 0.1, 0.3, 0.5)) .build() val crossValidator = new CrossValidator().setEstimator(pipeLine) .setEvaluator(new MulticlassClassificationEvaluator) .setNumFolds(2).setEstimatorParamMaps(paramGrid) val crossValidatorModel = crossValidator.fit(trainData) val predictions = crossValidatorModel.transform(testData)Cross Validation inside pipelineval naivebayes val indexer // param grid for multiple parameter val paramGrid = new ParamGridBuilder() .addGrid(naiveBayes.smoothing, Array(0.35, 0.1, 0.2, 0.3, 0.5)) .build() // validator for naive bayes val crossValidator = new CrossValidator().setEstimator(naiveBayes) .setEvaluator(new MulticlassClassificationEvaluator) .setNumFolds(2).setEstimatorParamMaps(paramGrid) // pipeline to execute compound transformation val pipeLine = new Pipeline().setStages(Array(indexer, crossValidator)) // pipeline model val pipeLineModel = pipeLine.fit(trainData) // transform data val predictions = pipeLineModel.transform(testData)So i want to know which way is better and its pro & cons.For both functions, i am getting same result and accuracy. Even second approach is little bit faster than first.
cross validation with pipe line in spark
From comment:There is only a single execution context unless threads are introduced (the alternative approach with a single-thread is just the non-lazy result building at each step). With threads, each stage is just a FIFO queue passing messages around a "pump". Threads (actually, concurrency) also greatly increase complexity, perhaps see the .NET4 "Parallel" methods.An "easy" method is just to configure N "starts" usingParallel.ForEach-- if and only if you can guarantee that the computations are side-effect free.Edit:See comment(s).ShareFolloweditedNov 14, 2010 at 7:55answeredNov 13, 2010 at 17:38user166390user1663902That was an idea, but in this pattern how do I ensure that the result Enumerable is in the same order as the source Enumerable?–Anindya ChatterjeeNov 13, 2010 at 17:471@AnindyaChatterjee: Rather than usingParallel.ForEachuse Parallel Linq (PLINQ) via theParallelEnumerableextension methods, which include theAsOrderedoperator to maintain your input ordering.–RichardNov 13, 2010 at 18:19Add a comment|
How to create a true function pipeline using C#? I got some idea like as follows, but it is not a true pipelinepublic static IEnumerable<T> ForEachPipeline<T>(this IEnumerable<T> source, params Func<T, T>[] pipeline) { foreach (var element in source) { yield return ExecutePipeline(element, pipeline); } } private static T ExecutePipeline<T>(T element, IEnumerable<Func<T, T>> jobs) { var arg = element; T result = default(T); foreach (var job in jobs) { result = job.Invoke(arg); arg = result; } return result; }In the above code each element ofIEnumerable<T>would able to get into the pipeline only after the previous element finishes executing all the functions (i.e. exits the pipeline), but according to the definition ifelement1finishes executingfunc1and start executingfunc2, by that timeelement2should start executingfunc1and so on thus maintaining continues flow of data in the pipeline.Is this kind of scenario possible to implement in C#? If possible please give me some example code.
How to implement true function pipeline in C#?
You can use the ForEach-Object cmdlet, assign the value to a variable, then use Write-Output to send the pipeline value to the next cmdlet. When you use Select-Object, you can access the variable value with a calculated property.

Get-Mailbox | ForEach-Object {
    $primarySmtpAddress = $_.PrimarySMTPAddress
    Write-Output $_ |
        Get-MailboxPermission |
        Select-Object Identity,User,AccessRights,@{n='PrimarySMTPAddress';e={$primarySmtpAddress}}
} | Format-Table -AutoSize
I got the following command:Get-Mailbox | Get-MailboxPermission | Select-Object Identity,User,AccessRights | Format-Table -AutoSize. I want to be able to get thePrimarySMTPAddressvalue from the previous pipe where I got the results for theGet-Mailbox. At the moment when I add the propertyPrimarySMTPAddressI receive nothing in the column.The final result should like this:Identity User AccessRights PrimarySMTPAddress -------- ------ ------------ ------------------ Domain.local/Users/Mailbox1 User1 {FullAccess}[email protected]Domain.local/Users/Mailbox2 User2 {FullAccess}[email protected]Domain.local/Users/Mailbox3 User3 {FullAccess}[email protected]
Get value from previous Cmdlet in the pipeline
I've found The Microarchitecture of Intel, AMD and VIA CPUs (Agner Fog's optimization manual) to be a good source for questions like yours.
Does hitting a switch statement in C (assuming that it uses a jump table) empty the pipeline of an x86 processor? I'm thinking that it might because it would need the result of the table lookup to know what instruction to execute next. Can it forward that result back early enough that the pipeline doesn't get totally emptied?
Do switch statements in C empty the x86 pipeline?
Both operations affect R2. If the i2 write-back occurs before the write-back of i1, but the write-back of i1 does eventually occur, then R2 will have the result of R4 + R7 instead of the value of R1 + R3. The WAW hazard is about result values being overwritten by a subsequent write that should not have overwritten that value.
Wikipedia'sHazard (computer architecture)article:Write after write (WAW) (i2tries to write an operand before it is written byi1) A write after write (WAW) data hazard may occur in a concurrent execution environment.Example For example:i1. R2 <- R4 + R7 i2. R2 <- R1 + R3The write back (WB) ofi2must be delayed untili1finishes executing.I haven't understood this.What is the problem ifi2is executed beforei1?
What is WAW Hazard?
A few years later: use make_pipeline() to concatenate the fitted scikit-learn estimators, as in: new_model = make_pipeline(fitted_preprocessor, fitted_model)
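To illustrate the idea, here is a small, hedged sketch of that make_pipeline approach in Python. It uses the modern SimpleImputer (the question below uses the long-deprecated Imputer), and the data and variable names are made up for illustration; make_pipeline simply wraps the already-fitted steps and does not refit them.

import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import make_pipeline

# Illustrative data, not from the original post
X = np.array([[1.0, 2.0], [np.nan, 3.0], [4.0, np.nan], [5.0, 6.0]])
y = np.array([10.0, 20.0, 30.0, 40.0])

imputer = SimpleImputer(strategy="mean").fit(X)                    # fitted preprocessor
model = GradientBoostingRegressor().fit(imputer.transform(X), y)   # fitted model

# Wrapping the fitted steps; predict() imputes and then predicts, with no refitting
pipe = make_pipeline(imputer, model)
print(pipe.predict(X))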
I have data at two stages:import numpy as np data_pre = np.array([[1., 2., 203.], [0.5, np.nan, 208.]]) data_post = np.array([[2., 2., 203.], [0.5, 2., 208.]])I also have two pre-existing fitted estimators:from sklearn.preprocessing import Imputer from sklearn.ensemble import GradientBoostingRegressor imp = Imputer(missing_values=np.nan, strategy='mean', axis=1).fit(data_pre) gbm = GradientBoostingRegressor().fit(data_post[:,:2], data_post[:,2])I need to pass a fitted pipeline anddata_preto another function.def the_function_i_need(estimators): """ """ return fitted pipeline fitted_pipeline = the_function_i_need([imp, gbm]) sweet_output = static_function(fitted_pipeline, data_pre)Is there a way to combine these two existing and fitted model objects into a fitted pipeline without refitting the models or am I out of luck?
Combine two fitted estimators into a pipeline
I solved the problem with the following code: val p: TestPipeline = TestPipeline.create().enableAbandonedNodeEnforcement(false)
Comment: I am not sure how enableAbandonedNodeEnforcement solves the problem, but note that TestPipeline implements org.junit.rules.TestRule and therefore should be used as a Rule.
When I try to run a test pipeline it raise an errorhere is the source code to create the test pipeline:val p: TestPipeline = TestPipeline.create()and here is the error :java.lang.IllegalStateException: Is your TestPipeline declaration missing a @Rule annotation? Usage: @Rule public final transient TestPipeline pipeline = TestPipeline.create();
Initializing Apache Beam Test Pipeline in Scala fails
Vitestsupports c8 and istanbulfor coverage, they both support Cobertura format.Wherever you define your test settingsvite.config.tsorvitest.config.tsset the coverage reporter to generate the cobertura format.import { defineConfig } from 'vitest/config' export default defineConfig({ test: { coverage: { reporter: ['cobertura', 'text'], }, }, })It will output by default at/coverage/cobertura-coverage.xml. You can then feed this into thePublishCodeCoverageResultstask.ShareFollowansweredJan 11, 2023 at 3:08Steven B.Steven B.9,20133 gold badges2626 silver badges4747 bronze badges1I tried to follow your instructions but I didn't find a solution. I opened a thread herestackoverflow.com/questions/76677625/…–gioJul 13, 2023 at 9:14Add a comment|
I'm trying to run an azure devops pipeline that contains a vitest run with coverage. The issue is that the azure coverage collector plugin accepts only jacoco/cobertura formats. I've seen that for jest is it possible to run with a cobertura reporter. Is there anyway of doing this for vitest?Thank you
Vitest coverage on azure devops
I found a way to solve the problem myself:

pip_model = cv_model.bestModel
pip_model.write().overwrite().save("/Users/fushuai/PyCharmProject/CTREstimation/model")
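As a hedged follow-up sketch, the saved best model can be reloaded later as a PipelineModel and used directly for scoring; the path and the test_df DataFrame below are placeholders, not from the original code.

from pyspark.ml import PipelineModel

best_model = cv_model.bestModel                        # the fitted pipeline chosen by CrossValidator
best_model.write().overwrite().save("/tmp/ctr_model")  # placeholder path

reloaded = PipelineModel.load("/tmp/ctr_model")        # no refitting required
predictions = reloaded.transform(test_df)              # test_df is an assumed test DataFrame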
I use pyspark.ml.tuning.CrossValidator and Pipeline to train a CrossValidatorModel named cv_model. After that I want to persist the model, so I use cv_model.save to save it, but an error occurs: AttributeError: 'Pipeline' object has no attribute '_transfer_param_map_to_java'. I do not know how to solve this error; thanks for helping me.
AttributeError: 'Pipeline' object has no attribute '_transfer_param_map_to_java'
The thing with your solution is that it only avoids pipeline execution when you have a merge request event, but there will still be duplicate pipelines, e.g. merge-request pipelines (the detached ones) and branch pipelines (the others); also, when pushing a tag, your setup will create a separate pipeline, I think. Following the docs you can avoid duplicate pipelines and switch between branch and MR pipelines with the following rules set for workflow (I added the || $CI_COMMIT_TAG since a pipeline should also be created when pushing a tag, though maybe only a few jobs will be added to that pipeline):

workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS'
      when: never
    - if: '$CI_COMMIT_BRANCH || $CI_COMMIT_TAG'

A merge-request pipeline can be recognised because it is detached and because of the merge-request symbol and the MR number on the left-hand side of the commit id. A 'normal' branch pipeline is denoted by the branch name and the GitLab branch symbol on the left of the commit id. (The original answer illustrated both cases with screenshots.)
When pushing a commit two pipeline jobs are triggered. But the same thing has not occurred when starting the pipeline manually.Where I should check? What is the meaning of the arrow from the left or from the right indicating branch activities?One thing I have to say is that there is a merge request pending, does it cause this issue?
Gitlab Double pipeline triggering issue
I think multiline parameters are not supported natively but you can useobjectto pass a multiline string. The way it can be done is by adding a yaml object that will contain the multiline string:eg.foo: | Multiline text in parameterThen you can accessfooby writing${{ parameters.Multiline.foo }}.This is the pipeline code:parameters: - name: Multiline type: object pool: vmImage: 'ubuntu-latest' steps: - bash: | cat >> script.txt << EOL ${{ parameters.Multiline.foo }} EOL cat script.txtShareFolloweditedJul 8, 2021 at 15:20answeredJul 8, 2021 at 12:16Murli PrajapatiMurli Prajapati9,23355 gold badges3939 silver badges5555 bronze badges2Can parameters be one line with type on the same line? Or more compact, line wise?–RodMar 19, 2023 at 15:37DevOps complains that I am not passing the Multiline parameter when it parses the pipeline and template. In the pipeline I have a parameter called foo, but the template is expecting a parameter called Multiline. Whant am I missing here?–Joe BrinkmanJun 30, 2023 at 15:59Add a comment|
Is it possible in the azure pipeline to pass a multi-line parameter? Iftypeis astring, you can't even write with a newline. If on the other hand thetypeisobject, you can enter multi-line, but all EOLs in the variable will be removed.parameters: - name: Multiline type: objectIf I save the parameter to a text file, the result is one-line- bash: | echo ${{ parameters.Multiline }} >> script.txt cat script.txt
Azure Pipelines: multiline parameter
Since PipelineModels are valid stages for a PipelineModel, you should be able to use the following, which does not require fitting again:

pipe_model_new = PipelineModel(stages=[pipe_model, pipe_model2])
final_df = pipe_model_new.transform(df1)
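A brief, hedged follow-up: the composed PipelineModel behaves like any other fitted model, so it can also be persisted and reloaded; the path below is a placeholder, and pipe_model, pipe_model2 and df1 are the fitted objects from the surrounding post.

from pyspark.ml import PipelineModel

pipe_model_new = PipelineModel(stages=[pipe_model, pipe_model2])
final_df = pipe_model_new.transform(df1)

# The combined model can be saved and loaded without ever calling fit() again
pipe_model_new.write().overwrite().save("/user/pipe_text_combined")
reloaded = PipelineModel.load("/user/pipe_text_combined")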
I have a saved PipelineModel:pipe_model = pipe.fit(df_train) pipe_model.write().overwrite().save("/user/pipe_text_2")And now I want to add to this Pipe a new already fited PipelineModel:pipe_model = PipelineModel.load("/user/pipe_text_2") df2 = pipe_model.transform(df1) kmeans = KMeans(k=20) pipe2 = Pipeline(stages=[kmeans]) pipe_model2 = pipe2.fit(df2)Is that possible without fitting it again? In order to obtain a new PipelineModel but not a new Pipeline. The ideal thing would be the following:pipe_model_new = pipe_model + pipe_model2 TypeError: unsupported operand type(s) for +: 'PipelineModel' and 'PipelineModel'I've foundJoin two Spark mllib pipelines togetherbut with this solution you need to fit the whole Pipe again. That is what I'm trying to avoid.
Spark add new fitted stage to a exitsting PipelineModel without fitting again
You can wrap multiple commands in a single ScriptBlock and invoke it:

& {
    Get-Content f1.txt | % {
        $_ #SOME OPERATION
    }
    Get-Content f2.txt
} | % {
    #COMBINED OPERATIONS on f1.txt output and f2.txt
} > output.txt
Let's say I have two files,f1.txtandf2.txt.f1.txthas some corrections that need to be made, after which both files will need to be processed together. How do I merge the output off1.txtcorrections to the pipeline withf2.txtdata?Here is an illustration:Get-Content f1.txt | % { $_ #SOME OPERATION } # How do I merge this output into the next pipeline? Get-Content f2.txt | % { #COMBINED OPERATIONS on f1.txt output and f2.txt } > output.txtI understand I can save the first operation into a temporary file and read from it again for the combined operation:... } > temp.txt Get-Content temp.txt, f2.txt | ...But is there a way to do it without creating buffer files?
How to add to a pipeline in PowerShell?
You can select all the data sources, then apply theGroup Datasetsfilter. This will create a single data set (without actually duplicate the heavy data). Now you can just apply a single Clip filter.You can alternatively use theLinks Manager, accessible fromTools | Manage Linksto set up property links between several Clip filters. This, however, had issues in some old versions. I'd suggest using 4.1.0 if you're going this route.ShareFollowansweredMar 25, 2014 at 3:15UtkarshUtkarsh1,50211 gold badge1111 silver badges1919 bronze badges1Great, I have 4.0.1 so I will try with Links Manager.–eudoxosMar 25, 2014 at 22:15Add a comment|
I have several data source in Paraview, and clip all of them (with the Clip filter) to only the are of interest I need to see at this moment. It istedious to set the clip domain always three (or more) times. Is it somehow possible to share those parameters, or apply the same filter instance to multiple data sources?
Applying clip filter to several sources
What you can do is create a custom Data Provider to connect to your external SQL database. This way you can expose the external data to Sitecore as if it were native data.SeeWhen to Implement Data Providers in the Sitecore ASP.NET CMSfor more information.ShareFolloweditedJan 6, 2020 at 20:37James Skemp8,08899 gold badges6565 silver badges108108 bronze badgesansweredJun 15, 2013 at 21:03Martijn van der PutMartijn van der Put4,06211 gold badge1818 silver badges2626 bronze badges31"Custom Data Provider" that sounds exactly like the thing I need, and thanks for the links.–KMNJun 16, 2013 at 12:38Hi Martijn, both the links are not working :-(. I know it's been a long time. By any chance could you please provide any updated references?–PrawinMay 2, 2017 at 18:33Hence the problem with link-only answers. I've fixed the link I could find a replacement for and put in a title in case Sitecore changes URLs again, and removed the other.–James SkempJan 6, 2020 at 20:38Add a comment|
What would be a good way, and good practice, for "integrating" an external SQL database into a Sitecore project? The Sitecore project will get a lot of its content from this external database, which is maintained elsewhere and is constantly updated (so copying or syncing the external database is not really preferred, and we don't plan on enriching the data either). Is there some method of defining objects and "pipelines" between Sitecore and the external database, ideally without having to use too many web services?
Sitecore external database integration
You can't make a pipeline conditional because the commands are run in parallel. IfCweren't run untilBexited, where wouldB's output be piped to?You'll need to storeB's output in a variable or a temporary file. For instance:out=$(mktemp) trap 'rm "$out"' EXIT if A | B > "$out"; then C < "$out" else cat "$out" fi | D | EShareFolloweditedJan 17, 2017 at 23:38answeredJan 17, 2017 at 23:25John KugelmanJohn Kugelman355k6969 gold badges540540 silver badges582582 bronze badges3This looks like the solution I needed. Thanks.–Dave CloseJan 18, 2017 at 0:03Put it into a function,export -fit, and it is ready for GNU Parallel–Ole TangeJan 18, 2017 at 7:21Instead of themktempand thetrapyou can use $PARALLEL_TMP.–Ole TangeJan 18, 2017 at 7:27Add a comment|
Given a pipeline something like "A|B|C|D|E", I want to make step C conditional on the result of step B. Something like this:A | B | if [ $? -ne 0 ]; then C; else cat; fi | D | EBut this doesn't seem to work; C is never executed no matter what the result of B. I'm looking for a better solution.I understand that each step of a pipeline runs in its own subshell. So I can't pass an environment variable back to the pipeline. But this pipeline is in a Gnu Parallel environment where many such pipelines are running concurrently and none of them knows any unique value (they just process the data stream and don't need to know the source, the parent script handles the necessary separation). That means that using a temporary file isn't practical, either, since there isn't any way to make the file names unique. Even $$ doesn't seem to give me a value that is the same in each of the steps.
Conditional step in a pipeline
Aren't you just processing an iterator? The natural way to use an iterator in Python is the for loop:

for item in all_items():
    item.get("created_by").location().surrounding_cities()

There are other possibilities, such as list comprehensions, which may make more sense depending upon what you're doing (usually when you're trying to generate a list as your output).
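If you do want something closer to a composable pipeline, a hedged sketch using generator functions is shown below; the data and the helper names are invented for illustration and are not the poster's real API. Each stage lazily consumes the previous iterator, so nothing is held in memory as a list.

# Invented sample data standing in for the poster's objects
ITEMS = [
    {"created_by": {"name": "ann", "city": "Oslo"}},
    {"created_by": {"name": "bob", "city": "Bergen"}},
]
NEARBY = {"Oslo": ["Drammen", "Lillestrom"], "Bergen": ["Askoy", "Os"]}

def all_items():
    yield from ITEMS

def creators(items):
    for item in items:
        yield item["created_by"]

def cities(people):
    for person in people:
        yield person["city"]

def surrounding_cities(names):
    for name in names:
        yield from NEARBY.get(name, [])

# Each stage is an iterator over the output of the previous one
for city in surrounding_cities(cities(creators(all_items()))):
    print(city)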
Is there a library or recommended way for creating an iterator pipeline in Python?For example:>>>all_items().get("created_by").location().surrounding_cities()I also want to be able to access attributes of the objects in the iterators. In the above example,all_items()returns an iterator of items, and the items have different creators.The.get("created_by")then returns the "created_by" attribute of the items (which are people), and thenlocation()returns the city of each person, and then pipes it tosurrounding_cities(), which returns all an iterator of the surrounding cities for each location (so the end result is a large list of surrounding cities).
How to create an iterator pipeline in Python?
You can define something similar to your(??)operator fairly easily (but operators can't start with a question mark):let (~??) f x = if (x <> null) then f xUnfortunately, your pipelined code will need to be a bit more verbose (also, note that you can drop thenewkeyword for calling constructors):Class1() |> fun x -> x.Method1()Putting it all together:Class1() |> fun x -> x.Method1() |> ~?? (fun x -> x.Method2())ShareFollowansweredSep 17, 2009 at 19:09kvbkvb55k22 gold badges9191 silver badges133133 bronze badgesAdd a comment|
Is it possible to call a method on a returned object using the pipeline infix operator?Example, I have a .Net class (Class1) with a method (Method1). I can currently code it like this:let myclass = new Class1() let val = myclass.Method1()I know I could also code it as suchlet val = new Class1().Method1()However I would like to do be able to pipeline it (I am using the ? below where I don't know what to do):new Class1() |> ?.Method1()Furthermore, say I had a method which returns an object, and I want to only reference it if that method didn't return null (otherwise bail?)new Class1() |> ?.Method1() |> ?? ?.Method2()Or to make it clearer, here is some C# code:public void foo() { var myclass = new Class1(); Class2 class2 = myclass.Method1(); if (class2 == null) { return; } class2.Method2(); }
Is it possible to use the pipeline operator to call a method on a returned object?
For this, you can have a look at the user guide, where it says under the paragraph on nested parameters: "Individual steps may also be replaced as parameters, and non-final steps may be ignored by setting them to 'passthrough'". In your case, I would define a grid with a list of two dictionaries, one for the case where the whole pipeline is used, and one where the PCA is omitted:

param_grid = [
    {
        'pca__n_components': [3, 5, 7],
        'gbc__n_estimators': [50, 100]
    },
    {
        'pca': ['passthrough'],  # skip the PCA
        'gbc__n_estimators': [50, 100]
    }
]

GridSearchCV will now span the grids according to each dictionary in the list and try combinations with and without PCA.
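Below is a runnable, hedged sketch of the whole setup, using a synthetic dataset since the original data is not available; the 'pca' and 'gbc' step names match the pipeline defined in the question that follows.

from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

pipeline = Pipeline([("pca", PCA()), ("gbc", GradientBoostingClassifier())])

param_grid = [
    {"pca__n_components": [3, 5, 7], "gbc__n_estimators": [50, 100]},
    {"pca": ["passthrough"], "gbc__n_estimators": [50, 100]},  # PCA step skipped
]

search = GridSearchCV(pipeline, param_grid, cv=5, scoring="roc_auc", n_jobs=-1)
search.fit(X, y)
print(search.best_params_)  # shows whether PCA was kept or bypassed in the best model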
This got closed the first time I asked it becausethis questionasks something similar. However despite the answers showing how to add/remove from a step from the pipeline, none of them show how this works withGridSearchCVand I'm left wondering what to do with the pipeline that I've removed the step from.I'd like to train a model using a grid search and test the performance both when PCA is performed first and when PCA is omitted. Is there a way to do this? I'm looking for more than simply settingn_componentsto the number of input variables.Currently I define my pipeline like this:pca = PCA() gbc = GradientBoostingClassifier() steps = [('pca', pca), ('gbc', gbc)] pipeline = Pipeline(steps=steps) param_grid = { 'pca__n_components': [3, 5, 7], 'gbc__n_estimators': [50, 100] } search = GridSearchCV(pipeline, param_grid, n_jobs=-1, cv=5, scoring='roc_auc')
Is there a way for sklearn pipeline to train with and without a step during a grid search? I can remove steps but how do i pass this to GridSearchCV?
By default in bash, the success or failure of a pipeline is determined solely by the last command in the pipeline.You may however enable thepipefailoption (set -o pipefail) and the pipeline will return failure if any command in the pipeline fails.ExampleThis pipeline succeeds:$ false | true | false | true ; echo $? 0This pipeline fails:$ set -o pipefail $ false | true | false | true ; echo $? 1DocumentationFromman bash:The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled. If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully.ShareFollowansweredMar 13, 2018 at 23:42John1024John1024112k1414 gold badges144144 silver badges176176 bronze badgesAdd a comment|
Recently I'm learning theset -eof POSIX shell on Ubuntu 14.04. My reference material is the "IEEE Std 1003.1-2008, 2016 Edition", "Shell & Utilities" chapter. Fromthis sectionI see-edoesn't cause the script to quit when the command fails in a pipeline (unless the failed command is the last one in the pipeline):The failure of any individual command in a multi-command pipeline shall not cause the shell to exit. Only the failure of the pipeline itself shall be considered.I then wrote a simple script to confirm this behavior:( set -e false | true | false | true echo ">> Multi-command pipeline: Last command succeeded." ) ( set -e false | true | false echo ">> Multi-command pipeline: Last command failed." )The "Last command succeeded" message is printed out, while the "Last command failed" message is not.My questions are:The chained commandsfalse | true | falsedon't seem to be a failure of the pipeline. It's just the failure of the last command.The pipeline itself still succeeds. Am I right??Is there a way to simulate a pipeline failure??We can usefalseto simulate the failure of a command. Is there a similar command for a pipeline?
How to cause a Linux pipeline to fail?
It looks like GStreamer cannot find a suitable plugin for decoding H264. Either you do not have an H264 decoder element installed, or GStreamer is looking in the wrong path for your elements. First, try running gst-inspect-1.0; this should output a long list of all the elements GStreamer has detected. If this doesn't return any elements, you probably need to set the GST_PLUGIN_PATH environment variable to point to the directory where your plugins are installed (the "Running GStreamer" documentation should help). If it DOES return many elements, run gst-inspect-1.0 avdec_h264 to verify that you have the H264 decoder element.
Comments: x264 only provides an encoder, not a decoder, so you need to install/compile gst-libav to get the avdec_h264 element for decoding H264. On Ubuntu 16.04 that is: sudo apt-get install gstreamer1.0-libav
I'm sure I've had this pipeline working on an earlier Ubuntu system I had set up (formatted for readability):playbin uri=rtspt://user:[email protected]/ch1/main video-sink='videoconvert ! videoflip method=counterclockwise ! fpsdisplaysink'Yet, when I try to use it within my program, I get:Missing element: H.264 (Main Profile) decoder WARNING: from element /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0: No decoder available for type 'video/x-h264, stream-format=(string)avc, alignment=(string)au, codec_data=(buffer)014d001fffe10017674d001f9a6602802dff35010101400000fa000030d40101000468ee3c80, level=(string)3.1, profile=(string)main, width=(int)1280, height=(int)720, framerate=(fraction)0/1, parsed=(boolean)true'. Additional debug info: gsturidecodebin.c(938): unknown_type_cb (): /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0Now I'm pretty certain Ihavean H264 decoder installed and indeed the gstreamer pluginsautogen.sh/configurecorrectly recognised the fact. Installed packages areh264enc,libx264-142,libx264-devandx264.It does exactly the same thing if I use the more "acceptable"autovideosinkin place offpsdisplaysink, or if I try to play the RTSP stream withgst-play-1.0. However, it works if I use the test pattern sourcevideotestsrc.What am I doing wrong?
What's wrong with this GStreamer pipeline?
I finally made the query work; I am answering my own question for future reference. Since I am using a 2d index and not 2dsphere, I didn't have to use coordinates in the near operator. This query works:

db.tm_stops.aggregate([
  {
    $geoNear: {
      near: [-82.958841, 42.370114],
      distanceField: "stops.calculated",
      query: { agency_key: "DDOT" },
      includeLocs: "stops.loc"
    }
  }
])

But I discovered that in the returned document I can only see the calculated distance and the (lon, lat) pair used to calculate it; I can't actually return the whole embedded document used for the calculation. If I find a way to return the whole embedded document, I'll update my answer.
Comment: adding key as in the docs was not sufficient; adding includeLocs as well helps.
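For anyone driving this from Python, here is a hedged pymongo version of the same aggregation; the connection string is a placeholder, the database name is taken from the error message in the question below, and the collection and field names come from the post.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder connection string
coll = client["gtfs"]["tm_stops"]                   # database name assumed from the error message

pipeline = [
    {
        "$geoNear": {
            "near": [-82.958841, 42.370114],        # plain [lon, lat] for a 2d index
            "distanceField": "stops.calculated",
            "query": {"agency_key": "DDOT"},
            "includeLocs": "stops.loc",
        }
    }
]

for doc in coll.aggregate(pipeline):
    print(doc.get("route_id"), doc.get("stops"))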
My document structure is like the following:{ agency_key : '', route_id: '', direction_id: '', stops: [ {stop_id:15, stop_lat: '', stop_lon: '', loc: [-83.118972, 42, 121567] }, {... } ] }I want to find the stops which are close to a given (lat,lon) pair within a given distance from every document in the collection. I created 2d indexes. I tried $unwind, then $geoNear but it says $geoNear can only be at the first stage of the pipeline. I tried this:db.tm_stops.aggregate([ ... { ... $geoNear: { near: {coordinates: [-82.958841, 42.370114] }, distanceField: "stops.calculated", query: { agency_key: "DDOT"}, ... includeLocs: "stops.loc" } ... } ... ])I tried the following:db.tm_stops.find({stops:{$near:[-82.958841, 42.370114], $maxDistance: 1 } } )It throws this error:error: { "$err" : "Unable to execute query: error processing query: ns=gtfs.tm_stops limit=0 skip=0\nTree: GEONEAR field=stops maxdist=1 isNearSphere=0\nSort: {}\nProj: {}\n planner returned error: unable to find index for $geoNear query", "code" : 17007What should be my mongo query? I will need to execute this from a Node.js application but I want to see some result in Mongo shell first. Thanks in advance.
How to use $geoNear on array of embedded documents?
Short form: You can't (without writing some code), and it's a feature, not a bug.If you're doing things in a safe way, you're protecting your data from being parsed as code (syntax). What you explicitly want here, however, is to treat data as code, but only in a controlled way.What you can do is iterate over elements, useprintf '%q ' "$element"to get a safely quoted string if they aren't a pipeline, and leave them unsubstituted if they are.After doing that, and ONLY after doing that, can you safely eval the output string.eval_args() { local outstr='' while (( $# )); do if [[ $1 = '|' ]]; then outstr+="| " else printf -v outstr '%s%q ' "$outstr" "$1" fi shift done eval "$outstr" } eval_args "${pipeline[@]}"By the way -- it's much safer NOT TO DO THIS. Think about the case where you're processing a list of files, and one of them is named|; this strategy could be used by an attacker to inject code. Using separate lists for the before and after arrays, or making only one side of the pipeline an array and hardcoding the other, is far better practice.ShareFolloweditedJun 8, 2013 at 1:19answeredJun 7, 2013 at 21:04Charles DuffyCharles Duffy287k4343 gold badges408408 silver badges455455 bronze badges11+1, but the recommendation to not write code like thiscannotbe stressed enough.–chepnerJun 8, 2013 at 13:02Add a comment|
How can I run a command line from abasharray containing a pipeline?For example, I want runls | grep xby means of:$ declare -a pipeline $ pipeline=(ls) $ pipeline+=("|") $ pipeline+=(grep x) $ "${pipeline[@]}"But I get this:ls: cannot access |: No such file or directory ls: cannot access grep: No such file or directory ls: cannot access x: No such file or directory
Run a bash array with pipes
I had the same issue; it was caused by a "dark mode" extension in Google Chrome. It was fine in Firefox, but not in Chrome. I suggest disabling the dark mode extension in Chrome when using Jenkins.
I was using Jenkins for a while and everything was working perfectly fine. But two or three days back I needed to change some configuration for one of my projects, and when I hit the Apply button after making those changes, an error screen appeared. Even if I don't change anything and click the Apply button, the same error screen pops up. I am currently using Jenkins version 2.265; the error occurs when I hit this button.
Jenkins apply button pops up blank error page
You can adda custom conditionfor the publish artifacts step:and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))Now the step will run only when the build reason isnotpull request.ShareFollowansweredSep 21, 2020 at 7:31Shayki AbramczykShayki Abramczyk39k1717 gold badges9797 silver badges121121 bronze badgesAdd a comment|
I am using Azure DevOps and I have a single build pipeline with a number of steps includingPublishBuildArtifactsdefined in theazure-pipelines.ymlfile.I have pointed the same pipeline for the Build Validation (Validate code by pre-merging and building pull request changes.) from the master branch's build policies option. However, for this PR build run, I don't to run certain tasks likePublishBuildArtifacts.How can I achieve this? I can think of one way which is to create a separate pipeline for PR and also a separateazure-pipelines-pr.ymlfile and not adding those tasks in that file. But this feels like an approach with redundancy to me. Is there any better way to achieve this?
Azure DevOps how to skip PublishBuildArtifacts step for PR build runs
You cannot know this unless you check the logs. There is an open issue about this: https://gitlab.com/gitlab-org/gitlab-ce/issues/31679
I can see who creates the Gitlab pipeline/job, however, is it possible to see who canceled it? Even better to receive a notification if it was canceled by someone.As shown from the screenshot, the job is canceled, but not by me, and the output log is empty.BTW, I checked the other job contains log and canceled while running, but still couldn't find who canceled it.
Is it possible to see who canceled the Gitlab pipeline?
Both popular Java Redis clients,Jedisandlettuceprovide async/pipelining. Seeherefor an example. You get aFutureafter issuing commands so you can sync on your own or work with future callbacks.ShareFollowansweredFeb 1, 2015 at 11:10mp911demp911de17.8k22 gold badges5757 silver badges9797 bronze badgesAdd a comment|
I am looking at the possibility of triggering the redis command from the client as a normal api and the library can pipeline the commands into it and possibly reply asynchronously back. Any library for the Java would be highly appreciated.Any pointers to opensource work on the same lines would also be of the great help.
Is there a redis library which handles the pipelining automatically?
As it stands thefoldl'you are getting fromFoldableis defined in terms of the foldr you gave it. The default implementation is the brilliant and surprisingly goodfoldl' :: (b -> a -> b) -> b -> t a -> b foldl' f z0 xs = foldr f' id xs z0 where f' x k z = k $! f z xBut foldl' is the specialty of your type; fortunately theFoldableclass includesfoldl'as a method, so you can just add this to your instance.foldl' op acc0 (Stream sf s0) = loop s0 acc0 where loop !s !acc = case sf s of Nothing -> acc Just (a,s') -> loop s' (op acc a)For me this seems to give about the same time asbestcaseNote that this is a standard case where we need a strictness annotation on the accumulator. You might look in thevectorpackage's treatment of a similar typehttps://hackage.haskell.org/package/vector-0.10.12.2/docs/src/Data-Vector-Fusion-Stream.htmlfor some ideas; or in the hidden 'fusion' modules of the text libraryhttps://github.com/bos/text/blob/master/Data/Text/Internal/Fusion.ShareFollowansweredDec 29, 2014 at 8:19MichaelMichael2,8891818 silver badges1616 bronze badgesAdd a comment|
I've made a type which is supposed to emulate a "stream". This is basically a list without memory.data Stream a = forall s. Stream (s -> Maybe (a, s)) sBasically a stream has two elements. A states, and a function that takes the state, and returns an element of typeaand the new state.I want to be able to perform operations on streams, so I've importedData.Foldableand defined streams on it as such:import Data.Foldable instance Foldable Stream where foldr k z (Stream sf s) = go (sf s) where go Nothing = z go (Just (e, ns)) = e `k` go (sf ns)To test the speed of my stream, I've defined the following function:mysum = foldl' (+) 0And now we can compare the speed of ordinary lists and my stream type:x1 = [1..n] x2 = Stream (\s -> if (s == n + 1) then Nothing else Just (s, s + 1)) 1 --main = print $ mysum x1 --main = print $ mysum x2My streams are about half the speed of lists (full codehere).Furthermore, here's a best case situation, without a list or a stream:bestcase :: Int bestcase = go 1 0 where go i c = if i == n then c + i else go (i+1) (c+i)This is a lot faster than both the list and stream versions.So I've got two questions:How to I get my stream version to be at least as fast as a list.How to I get my stream version to be close to the speed ofbestcase.
Speeding up a stream like data type
I would suggest installing all the required plugins and then restarting your Jenkins server; if you are running it locally, a system restart might also help.
I have created a new job in Jenkins using a pipeline. I provided the GitLab project URL and the Jenkinsfile path in SCM. While building the pipeline I am not able to see any message between the start and end of the pipeline. If I put invalid code in the Jenkinsfile the build fails, but when running a simple command like echo, nothing is printed to the console. Is there anything I am missing?

Console output:
[Pipeline] Start of Pipeline
[Pipeline] End of Pipeline
Finished: SUCCESS

Jenkinsfile:
pipeline {
    agent any
    stages {
        stage ('Build') {
            steps {
                echo 'Running build phase. '
            }
        }
    }
}
Jenkins console output not printing anything between start and end of pipeline
You are using the wrong endpoint. To do this, you need to follow the path below:

1. List all of your pipelines and get the newest one: GET /projects/:id/pipelines
2. List the jobs from this pipeline: GET /projects/:id/pipelines/:pipeline_id/jobs
3. After that you can trigger (play) your job: POST /projects/:id/jobs/:job_id/play

Comment: if your pipeline is not the newest one (for example, a newer pipeline was created from another branch), another approach is to get your merge request from the API to obtain the ID of its pipeline.
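A hedged Python sketch of those three calls using the requests library is shown below; the GitLab URL, project id, token and the job name "run" (taken from the question that follows) are placeholders you would adapt.

import requests

GITLAB = "https://gitlab.example.com/api/v4"        # placeholder instance URL
PROJECT_ID = 5
HEADERS = {"PRIVATE-TOKEN": "<personal-access-token>"}

# 1. newest pipeline for the ref we care about
pipelines = requests.get(f"{GITLAB}/projects/{PROJECT_ID}/pipelines",
                         headers=HEADERS, params={"ref": "master"}).json()
pipeline_id = pipelines[0]["id"]

# 2. jobs of that pipeline; pick the manual job called "run"
jobs = requests.get(f"{GITLAB}/projects/{PROJECT_ID}/pipelines/{pipeline_id}/jobs",
                    headers=HEADERS).json()
job_id = next(j["id"] for j in jobs if j["name"] == "run")

# 3. play (trigger) just that job
requests.post(f"{GITLAB}/projects/{PROJECT_ID}/jobs/{job_id}/play", headers=HEADERS)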
I have gitlab project with ci file:stages: - build - run build: stage: build script: - (some stuff) tags: - my_runner_tag except: - triggers when: manual run: stage: run script: - (some stuff) tags: - my_runner_tag except: - triggers when: manualJobs are created on every source code change, and they can be run only manually, using gitlab interface. Now, i want to have possibility to trigger stagerunwith Gitlab API. Trying:curl -X POST \ > -F token=xxxxxxxxxxxxxxx \ > -F ref=master \ > https://gitlab.xxxxx.com/api/v4/projects/5/trigger/pipelineReturns:{"message":{"base":["No stages / jobs for this pipeline."]}}Seems, like i have to define stage to trigger, but i can't find a way to pass it via api call.
How to trigger only specific stage of pipeline with gitlab API?
Yes, you can only pass artifacts frompreviousstages.By default, all artifacts from all previous stages are passed, but you can use the dependencies parameter to define a limited list of jobs (or no jobs) to fetch artifacts from.To use this feature, define dependencies in context of the job and pass a list of all previous jobs from which the artifacts should be downloaded. You can only define jobs from stages that are executed before the current one. An error will be shown if you define jobs from the current stage or next ones.https://docs.gitlab.com/ee/ci/yaml/#dependenciesAlthough they might add support forneeds:in future to reference jobs in thecurrentstage:https://gitlab.com/gitlab-org/gitlab/issues/30632ShareFolloweditedJun 20, 2020 at 9:12CommunityBot111 silver badgeansweredMar 29, 2020 at 16:51IvanIvan9,36744 gold badges6363 silver badges7575 bronze badgesAdd a comment|
I have multiple jobs (not in parallel) and I'm trying to pass artifacts from the first job to the second one. Here is what it looks like:

deploy-build-docker 1/2:
  stage: deploy
  image: docker:stable
  script:
    - ...
  artifacts:
    paths:
      - path

deploy-preprod 2/2:
  stage: deploy
  image: alpine
  dependencies: [deploy-build-docker]
  script:
    - ....

CI can't find the dependency and gives me this error: "deploy-preprod 2/2 job: undefined dependency: deploy-build-docker". I also tried deploy and deploy-build-docker 1/2, but I still have the same issue. So how can I do this? Do I have to do the build in another stage?
gitlab ci: Pass artifacts from two jobs of the same stage
It does, at least, according to Lettuce:https://github.com/lettuce-io/lettuce-core/wiki/Pipelining-and-command-flushingCommand flushing is an advanced topic and in most cases (i.e. unless your use-case is a single-threaded mass import application) you won’t need it as Lettuce uses pipelining by default.It means, it uses pipelining by default, I don't think this is true or clear, as it also mentions:A flush is an expensive system call and impacts performance. Batching, disabling auto-flushingThe AutoFlushCommands state is set per connection and therefore affects all threads using the shared connection. If you want to omit this effect, use dedicated connections. The AutoFlushCommands state cannot be set on pooled connections by the lettuce connection pooling.Which means, if you want true Redis pipelininghttp://redis.io/topics/pipelining, you can do that as well with some extra effort.ShareFollowansweredOct 29, 2019 at 23:51kisnakisna2,95911 gold badge2626 silver badges3030 bronze badges11April 2023 - Lettuce's own documentation is out of date -lettuce.io/core/release/api/io/lettuce/core/api/async/…disable autoflush on the connection and not the Commands object.–DavidApr 6, 2023 at 22:18Add a comment|
Need to run a batch of commands in redis-cluster mode with lettuce.For commands that should run in one partition, i hope to run them in one node sequentially.As i know, lettuce can support redis pipelining by set the AutoFlushCommands state to be false. But in redis-cluster mode, the command may be send to different nodes in one partition. Is there any way to avoid the problem?
Does lettuce support pipelining in redis-cluster?
The code from less#if HAVE_DUP /* * Force standard input to be the user's terminal * (the normal standard input), even if less's standard input * is coming from a pipe. */ inp = dup(0); close(0); #if OS2 /* The __open() system call translates "/dev/tty" to "con". */ if (__open("/dev/tty", OPEN_READ) < 0) #else if (open("/dev/tty", OPEN_READ) < 0) #endif dup(inp); #endifIt opens a direct stream from /dev/tty as well as whatever your stdin is.ShareFollowansweredOct 25, 2017 at 10:5712312311k22 gold badges2222 silver badges4646 bronze badges2Does it means that less is still reading from wc output AND from /dev/tty ?–totoOct 25, 2017 at 11:55Yes. The data it reads from its standard input (which is the pipe whose other end is used as the standard output ofwc) is what it writes its standard output. The data it reads from/dev/ttyis what it interprets as interactive commands.–chepnerOct 25, 2017 at 13:36Add a comment|
I'm currently looking at how pipelining is managed in shells. For example, in my shell, if I enter "ls | wc | less", the result of this operation is the creation of three processes: ls, wc and less. The output of ls is piped to the standard input of wc, and the output of wc is piped to the standard input of less. To me, this means that during the execution of "ls | wc | less", the standard input of less will not be the keyboard but the output of wc. Yet less is still responsive to my keyboard. Why? I don't understand, because to me, less should not be sensitive to the keyboard since its input has been piped. Does somebody have an idea? Thanks.
Pipeline management in linux shell
I ended up just fiddling around with the code and found how to get the job details. Our next step is to see if there is a way to get a list of all of the jobs.# start the pipeline process pipeline = p.run() # get the job_id for the current pipeline and store it somewhere job_id = pipeline.job_id() # setup a job_version variable (either batch or streaming) job_version = dataflow_runner.DataflowPipelineRunner.BATCH_ENVIRONMENT_MAJOR_VERSION # setup "runner" which is just a dictionary, I call it local local = {} # create a dataflow_client local['dataflow_client'] = apiclient.DataflowApplicationClient(pipeline_options, job_version) # get the job details from the dataflow_client print local['dataflow_client'].get_job(job_id)ShareFollowansweredNov 22, 2016 at 17:49T.OkaharaT.Okahara1,22422 gold badges1616 silver badges2828 bronze badges2Hey @T.Okahara, any change you have figured out how to do this with a dataflow template?–jmoore255Sep 18, 2018 at 15:36Sorry @jmoore255 other than the code above we didn't work more on getting our pipelines running in Cloud Dataflow. We actually built our own locally running machine to run our processes on since we found other issues with running on Dataflow like not allowing us to trigger from App Engine and the slow startup/cleanup times. It might be different now but we still run that our pipelines (doing data munging for ML) locally.–T.OkaharaSep 18, 2018 at 18:05Add a comment|
We currently have a Python Apache Beam pipeline working and able to be run locally. We are now in the process of having the pipeline run on Google Cloud Dataflow and be fully automated but have a found a limitation in Dataflow/Apache Beam's pipeline monitoring.Currently,Cloud Dataflowhas two ways of monitoring your pipeline(s) status, either through their UI interface or through gcloud in the command line. Both of these solutions do not work great for a fully automated solution where we can account for loss-less file processing.Looking at Apache Beam's github they have a file,internal/apiclient.pythat shows there is a function used to get the status of a job,get_job.The one instance that we have found get_job used is inrunners/dataflow_runner.py.The end goal is to use this API to get the status of a job or several jobs that we automatically trigger to run to ensure they are all eventually processed successfully through the pipeline.Can anyone explain to us how this API can be used after we run our pipeline (p.run())? We do not understand whererunnerinresponse = runner.dataflow_client.get_job(job_id)comes from.If someone could provide a larger understanding of how we can access this API call while setting up / running our pipeline that would be great!
Python Apache Beam Pipeline Status API Call
You need to use theluigi.contrib.hadoop_jarpackage (code).In particular, you need to extendHadoopJarJobTask. For example, like that:from luigi.contrib.hadoop_jar import HadoopJarJobTask from luigi.contrib.hdfs.target import HdfsTarget class TextExtractorTask(HadoopJarJobTask): def output(self): return HdfsTarget('data/processed/') def jar(self): return 'jobfile.jar' def main(self): return 'com.ololo.HadoopJob' def args(self): return ['--param1', '1', '--param2', '2']You can also include building a jar file with maven to the workflow:import luigi from luigi.contrib.hadoop_jar import HadoopJarJobTask from luigi.contrib.hdfs.target import HdfsTarget from luigi.file import LocalTarget import subprocess import os class BuildJobTask(luigi.Task): def output(self): return LocalTarget('target/jobfile.jar') def run(self): subprocess.call(['mvn', 'clean', 'package', '-DskipTests']) class YourHadoopTask(HadoopJarJobTask): def output(self): return HdfsTarget('data/processed/') def jar(self): return self.input().fn def main(self): return 'com.ololo.HadoopJob' def args(self): return ['--param1', '1', '--param2', '2'] def requires(self): return BuildJobTask()ShareFolloweditedNov 3, 2015 at 16:07answeredNov 3, 2015 at 12:03Alexey GrigorevAlexey Grigorev2,4252828 silver badges4747 bronze badgesAdd a comment|
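As a small usage note (hedged, since it assumes the class names from the example above), the task graph can also be kicked off programmatically instead of via the luigi command-line tool:

import luigi

if __name__ == "__main__":
    # Builds BuildJobTask first (the jar), then runs YourHadoopTask
    luigi.build([YourHadoopTask()], local_scheduler=True)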
I need to run a Hadoop jar job usingLuigifrom python. I searched and found examples of writing mapper and reducer in Luigi but nothing to directly run a Hadoop jar.I need to run a Hadoop jar compiled directly. How can I do it?
Running Hadoop jar using Luigi python
$_is an automatic variable that only has meaning within a script block that executes on each item in the pipeline. The first example works because when you pipeline to ForEach-Object,$_is defined for the script block. The second example does not work because there is no script block so$_is undefined. AFAIK there is no way to do this without a foreach. You need something to compute a value for each item in the pipeline and Add-Member does not accept a script block to compute a value for the members it attaches. I guess you use a ScriptProperty like so:$ftpFiles = Get-FTPChildItem -Session $Session -Path "/myroot" -Recurse | Add-Member -type ScriptProperty -name CompareFullName -value {$this.FullName -Replace "ftp://ftp.server.com/myroot/", ""} -PassThrubut this is semantically different than what you have, since it computes the property value every time it accessed.Depending on what you are trying to do, you could use Select-Object to pull off the useful properties for use later:$ftpFiles = Get-FTPChildItem -Session $Session -Path "/myroot" -Recurse | select *, @{n="CompareFullName"; e={$_.FullName -replace "ftp://ftp.server.com/myroot/", ""}}This would produce new custom objects with the same properties, and an additional property, 'CompareFullName'.ShareFolloweditedMay 5, 2015 at 6:37David Gardiner17k2020 gold badges8282 silver badges121121 bronze badgesansweredSep 4, 2014 at 14:25Mike ZborayMike Zboray40.2k33 gold badges9696 silver badges127127 bronze badgesAdd a comment|
I'm novice with powershell and do not understand why this process works:$ftpFiles = Get-FTPChildItem -Session $Session -Path "/myroot" -Recurse | ForEach-Object { $_ | Add-Member -type NoteProperty -name CompareFullName -value ($_.FullName -Replace "ftp://ftp.server.com/myroot/", "") -PassThru }And this does not work:$ftpFiles = Get-FTPChildItem -Session $Session -Path "/myroot" -Recurse | Add-Member -type NoteProperty -name CompareFullName -value ($_.FullName -Replace "ftp://ftp.server.com/myroot/", "") -PassThruI try to add a property (CompareFullName) to the file object with value that uses another property of the same file object (FullName). Add-Member was supposed to accept piped values. what happens in the non-working syntax is that the property is added alright but the value is null. The first syntax works OK. I would appreciate an explanation or another way to achieve my goal without using foreach-object.
powershell add-member using pipeline and using $this property for value of new property
Just an idea: you can add something at the end of the build script, and when the build is finished you will get the thing you added. One easy way is to send an email notifying you that the build is finished.
I've been searching the web since yesterday about this and can't find a proper answer, so I was wondering if someone here might help me answer my problem, or tell me that it's currently impossible to retrieve this information. As the title says, I have a pipeline in Jenkins which connects 3-4 jobs, and everything runs perfectly and sequentially. It is set up like this, just to be clear: JOB1 -> JOB2 -> JOB3. All I want to know, and can't find out, is whether there is a way to check the status of the build pipeline itself. Does Jenkins maintain this information? I would like to know when the pipeline is finished, in pseudocode: if pipeline is finished then do something ... end
Jenkins Build Pipeline final status
http://www.opengl.org/documentation/specs/version1.1/state.pdf is a bit outdated, but still quite valuable in my opinion.
Comment: don't rely on this if you are working with OpenGL 2.0 or newer. While it is still basically correct, the way things should be done has changed a lot since those early days; use the D3D charts instead. They call things by different names, but the flow is the same as in OpenGL.
First of all, I need to clarify that I'm not after theOpenGL Quick Reference Sheet.Has anyone come by a flow-chart-like cheat sheet for OpenGL rendering pipeline? I'm talking about the sheets like these:If not, what's the closest I can get aside from the official quick reference sheet?
OpenGL Cheat Sheet?
There are a few options:
1. Use the instance object provided (docs). You can use the Python "instance" object as shown in the link to access basically everything you could also access from Dagit. (I have not used it extensively, but maybe the link helps you.)
2. Use the Dagster GraphQL API (documentation). I have not done anything with it yet, so I can only point you to the link.
3. Use partitions (docs). You can use partitions to trigger executing a job for each partition. I use this often, because it gives you really good control over which assets were materialized with which partitions (parameters). The link leads to an example of a dynamic partition, which is discovered by a sensor; the sensor then executes a job for each partition and also keeps track of the partitions in Dagit.
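For option 2, a hedged sketch of a small HTTP endpoint that submits a run through Dagster's GraphQL client is shown below. The job name, the run-config shape and the port are assumptions for illustration, not taken from the original project; check the DagsterGraphQLClient docs for the exact signature in your Dagster version.

from flask import Flask
from dagster_graphql import DagsterGraphQLClient

app = Flask(__name__)
client = DagsterGraphQLClient("localhost", port_number=3000)  # assumes the Dagster webserver on :3000

@app.route("/register/<player_name>")
def register(player_name):
    # Assumed job name and config shape; adjust to match your repository
    run_id = client.submit_job_execution(
        "player_registration_job",
        run_config={"ops": {"player_already_registered": {"config": {"player_name": player_name}}}},
    )
    return {"run_id": run_id}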
I have been looking all over for the answer, but can't seem to find what I'm looking forI want to create an api endpoint that can pass information to the dagster assets and trigger a run. For example, I have the following asset in dagster@asset def player_already_registered(player_name: str): q = text( f''' SELECT COUNT(*) FROM `player_account_info` WHERE summonerName = :player_name ''' ) result = database.conn.execute(q, player_name=player_name).fetchone()[0] return bool(result)Say that I have an endpoint already made where I can pass theplayer_namevia a get-parameter. How can I pass the parameter to the asset and then run the job itself?
dagster can you trigger a job to run via an api?
UnlikeGoogle Playcontains a set of deployment tasks which allow you to automate the release, promotion and rollout of app updates to the Google Play store from your CI environment.We do not have any build-in task or extension in TFS/Azure DevOps which handle the upload process to the Chrome Web Store.You need to customize by yourself. Since the artifacts are generated through build, you can publish/deploy to chrome store byusing release:Create a newrelease definitionLink that build as release artifactsWrite a scriptusing the Chrome Web Store Publish APIprogrammatically creating, updating, and publishing items in the Chrome Web StoreAdd powershell task to invoke the scriptOther settingsStart/trigger releaseShareFollowansweredNov 19, 2019 at 2:30PatrickLu-MSFTPatrickLu-MSFT50.3k55 gold badges3838 silver badges6565 bronze badgesAdd a comment|
I want to build a pipeline to automate the upload to the chrome web store after every successful build from TFS. Any leads would be really helpful.
How can I automate the upload of a successful TFS build to the Chrome Web Store?
I would try logging all network requests to see what's going on there:// Log and continue all network requests page.route('**', (route, request) => { console.log(request.url()); route.continue(); });If it never gets beyond thegoto(), perhaps try using the new playwright requestfixtureto see if it can even reachhttp://localhost:3000/main.await page.request.get("http://localhost:3000/main", { ignoreHTTPSErrors: true });ShareFollowansweredDec 22, 2021 at 18:49Nico MeeNico Mee98555 silver badges66 bronze badges22So the problem was that when i ran the test localy, i was connected to the backend thanks to VPN. However CI/CD doesnt have any way to connect to BE and the page never gets from login to main page.–Tomáš HrubýDec 27, 2021 at 11:22@TomášHrubý did you solve this? i'm currently having this issue–DidsApr 21, 2023 at 7:12Add a comment|
I am trying to run this code:test('should login', async ({ page }) => { await page.goto(localhost); await page.fill('[name=username]', 'username'); await page.fill('[name=password]', 'password'); await page.click('[name=login]'); await page.waitForURL(`${localhost}/main`); const currentUrl = await page.url(); expect(currentUrl).toBe(`${localhost}/main`); });When I run it withnpx playwright testlocaly, the test passes; but, when run in CI/CD, it fails:Timeout of 180000ms exceeded. page.waitForURL: Navigation failed because page was closed! =========================== logs =========================== waiting for navigation to "http://localhost:3000/main" until "load" ============================================================Any idea what causes this problem?
Playwright test works fine localy, but fails in pipeline
The value that is fetched from the memory, is written to the register file in the write-back stage of the pipeline. Writes to the register file happen in the first half of the clock cycle, while reads from the register file happen in the second half of the clock cycle.The value that is written to the register file can thus be read in the same clock cycle as it is written to the register file. Thusforwarding is not effective here.As for the number of stalls needed, you need to insert two bubbles into the pipeline, as thelwinstruction should be in the write back stage when thebeqinstruction is in the decode stage.I hope this answers your question.ShareFollowansweredJan 24, 2013 at 9:45Janus De BondtJanus De Bondt37344 silver badges2020 bronze badgesAdd a comment|
How many stalls do I need to execute the following instructions properly. I am a little confused with what I did, so I am here to see experts answers.lw $1,0($2);beq $1,$2,Label;Note that the check whether the branch will occur or not will be done in decoding stage. But the source registerrsof beq which is$1in this case will be updated after writeback stage of lw instruction. So do we need to forward new data from Memory in memory stage to Decoding stage of beq instruction.Here is the data path diagram:
Stalling or bubble in MIPS
Why should they go through the pipeline if there are no subscribers? If one of your intermediate steps is useful for their side-effects (You want them to run even if there are no other subscribers), you should rewrite the side-effect operation to be a subscriber.You could also make the step with a side-effect as a pass-through operation (or tee, if you will) if you wanted to continue the chain.ShareFollowansweredFeb 15, 2010 at 4:31Gideon EngelberthGideon Engelberth6,10511 gold badge2222 silver badges2222 bronze badgesAdd a comment|
Currently, I'm using the RX Framework to implement a workflow-like message handling pipeline. Essentially I have a message producer (deserializes network messages and calls OnNext() on a Subject) and I have several consumers.NOTE: If and transform are extension methods I have coded that simply return an IObservable.A consumer does something like the following:var commerceRequest = messages.Transform(x => GetSomethingFromDatabase(x) .Where(y => y.Value > 5) .Select(y => y.ComplexObject) .If(z => z.IsPaid, respond(z)) .Do(z => SendError(z));commerceRequestis then consumed by another similar pipeline and this continues up until the top where it ends with someone callingSubscribe()on the final pipeline. The issue I'm having is that the messages from the base don't propagate up unless subscribe is called on messages directly somewhere.How can I push the messages up to the top of the stack? I know this is an unorthodox approach but I feel it makes the code very simple to understand what is occurring to a message. Can anyone suggest another way of doing the same if you feel this is a totally terrible idea?
RX IObservable as a Pipeline
In the yaml pipelines you have the concept ofEnvironments. In your yaml release pipeline, you can use thedeployment joband target a particular environment in each of the stages: dev, test, etc.If you like to know which pipeline run has been recently deployed to a particular environment you can just opendeployment historysection of the environment.You can design the environments in many ways for example single environment per all applications like dev, test, then you have a single overview of recently deployed applications on a particular environment.ORYou can granulate environments and create a dedicated per each application something like App1-dev, App2-dev, App1-test, App2-test, etc.ShareFollowansweredAug 25, 2023 at 18:09fenrirfenrir35311 silver badge1010 bronze badgesAdd a comment|
When moving from the DevOps Releases way of deploying our applications to a YAML format, we noticed that we have no way of actually knowing the currently running release. We can see the most recently run pipeline, but if someone reruns a stage in an older pipeline it wouldn't be visisble in the list of pipelines. Contrary to the classic Release way of doing it, we could actually see the release that is running.This image shows how the pipeline is displayed in the list of runs now:Screen capture of what it looks like in Pipelines with YAMLThis is the old way:Screen capture of what it looks like in the old Releases wayIn the second image (Releases) it's much clearer to see what is most recently deployed.Since it highlights the stages that have been run most recently. Even though the Release in the top also has completed the stages to Dev and Testing, it shows that the second Release is the one that's actually running on the respective environments. One could think that this isn't an issue since Pipelines always display in the most recent run order,but if you rerun a stage in a previous pipeline run, it won't move that run to the top.(Hence making it impossible to see from a glance what deployment is currently running)Is this possible to achieve in some way using YAML pipelines in DevOps?
Show most recent deployment or rerun, like in the classic release UI
Yes you could define gitlab ci variable of file typehere is the doc for gitlab ce :https://docs.gitlab.com/ee/ci/variables/#use-file-type-cicd-variablesIn the settings of your project, you will need to define a variable (file type).Then gitlab will create a temporary file for this variable with your file content.You could use it directly or usecpcommand like this:cp $MY_SECRET_FILE $HOME/.m2/settings.xmledit: binary files seems not supported, so follow and upvote following issue :https://gitlab.com/gitlab-org/gitlab/-/issues/205379ShareFolloweditedFeb 20, 2023 at 1:32AndrewHarvey2,9321111 silver badges2222 bronze badgesansweredJul 23, 2020 at 17:08boly38boly381,8752424 silver badges3131 bronze badges2Thank you for your answer . the problem is that i can't have the files (jks or .p12) content as text or json to put it in the variable type "file" .. what i want is that when i chose type variable "file" in gitlab-ci it will show me an upload button to upload my file .. so i need to convert my files into text to get the content ... it's insane ....–maherJul 25, 2020 at 9:371thank you ! i juste replace .p12 file with json file so i can get the content and convert .jks to base64 also .–maherJul 25, 2020 at 19:01Add a comment|
I'm working on a gitlab-ci pipeline to automate build, sign "apk" and deploy to the play store. The pipeline work fine but the two files ".jks" to sign the "apk" and the ".p12" for my Google cloud platform service are now in my repository and it's not a secure way to do it. What I want to do is to put this two files ".jks and .p12" as a gitlab-ci variable to avoid putting this files in my repo...
How to put a file (.jks, .p12) as variable?
I think, the problem has to be withW_trainbecause find my example below with your code, which works pretty fine.from sklearn.preprocessing import StandardScaler from sklearn.ensemble import RandomForestRegressor from sklearn.pipeline import make_pipeline from sklearn.model_selection import GridSearchCV # creating pipeline pipeline = make_pipeline(StandardScaler(), RandomForestRegressor(n_estimators=100)) from sklearn.datasets import load_diabetes X, y = load_diabetes(return_X_y=True) hyperparameters = {'randomforestregressor__max_features': ['auto'], 'randomforestregressor__max_depth': [None] } clf = GridSearchCV(pipeline, hyperparameters, cv=10, verbose=10) clf.fit(X , y, **{'randomforestregressor__sample_weight': np.random.choice([0,2,3,5],size=len(X))}) # Fitting 10 folds for each of 1 candidates, totalling 10 fits [CV] randomforestregressor__max_depth=None, randomforestregressor__max_features=auto [CV] randomforestregressor__max_depth=None, randomforestregressor__max_features=auto, score=0.385, total= 0.2s ...ShareFollowansweredJul 11, 2019 at 14:27VenkatachalamVenkatachalam16.6k99 gold badges5050 silver badges7777 bronze badgesAdd a comment|
I am using a Scikit-Learn code from others to build a prediction tool. The original code works just fine but I need to addsample_weightto the prediction tool.Having searched for solutions in different documentation, I found that the major issue is that pipeline in Scikit-Learn does not supportsample_weightvery well.# creating pipeline pipeline = make_pipeline(preprocessing.StandardScaler(), RandomForestRegressor(n_estimators=100)) hyperparameters = {'randomforestregressor__max_features': ['auto'], 'randomforestregressor__max_depth': [None] } clf = GridSearchCV(pipeline, hyperparameters, cv=10, verbose=10) clf.fit(X_train, Y_train # , fit_params={'sample_weight': W_train} # , fit_params={'sample_weight':W_train} # , **{'randomforestregressor__sample_weight': W_train} ) # testing model pred = clf.predict(X_test) r2_score(Y_test, pred) mean_squared_error(Y_test, pred) print(r2_score(Y_test, pred)) print(mean_squared_error(Y_test, pred)) # 保存模型以便将来使用 joblib.dump(clf, 'rf_regressor.pkl')I've tried to insertsample_weightin different locations, but it all shows failure. Can anyone help tell me where to insert thesample_weightwithpipelinein place, OR realize the steps (includingsample_weight) without usingpipeline?
scikit-learn: fit() can't enable sample_weight when pipeline is involved
Do you mean, you want to create a kind of button, using which you will be able to create link between two variables in pipeline ? If yes, Then I think that is not possible.ShareFollowansweredNov 1, 2014 at 20:32AdhirajAdhiraj8111 silver badge66 bronze badges11the button outlined in red in the image links two items in the pipeline. I am wondering if I can create a keyboard shortcut to do this function instead of having to click the button. (There was a keyboard shortcut in the previous software AG developer)–thebumblecatNov 3, 2014 at 16:24Add a comment|
I want to create a shortcut for creating a link between two variables in my pipeline view.Can someone tell me how (or if it is possible) to create a shortcut to this button? When I look inPreferences > General > Keys, I can not find this function.I am using Software AG webMethods Designer 8.2
Custom Eclipse Shortcuts Within Views
OK, so I finally got it working by replacing Rhino with Node.js (I'm required to use jRuby so therubyracer was apparently not a good option either).==============================================UPDATE: It's been quite a while and I had pretty much forgotten about this post, but now I have come across it again I feel there's something I should mention: Although my original solution of using Node.js worked, it turned out to be more of an inadverted workaround than anything else.As I found out some time later, the source of the problem was that, due to my lack of experience with Linux, I was pretty sure that the locale was set up properly when it wasn't at all. Once I configured my locale correctly, it fixed this and quite a few other locale-related problems I was experiencing.ShareFolloweditedOct 31, 2013 at 17:48answeredFeb 13, 2013 at 20:38laffinkippahlaffinkippah8377 bronze badgesAdd a comment|
I'm writing a rails application in Spanish and I'm having trouble displaying accented characters from JavaScript.Everything works fine in development, but in production, in the unified/public/assets/application[*fingerprint*].jsfile, all my special Spanish characters get converted to question marks. I have triple checked that my .js files are indeed in UTF-8, and have also tried changing the extension to .js.erb and putting<%# encoding: utf-8 %>at the top of the files, but still no joy.I created a new, simple application from scratch just to test this and the problem persists. I've even tried disabling the uglifier gem, just in case, and that didn't work either. My main suspect now is Sprockets, but can't find any information of this happening to anyone else. Has anyone encountered a similar problem?
Rails 3.2.11 asset pipeline: accented characters in javascript get converted to question marks in production
Which version of IIS do you have ? If you use IIS 7, make sure you have the application pool type set to integrated and not classic. The integrated pipeline mode is IIS 7 specific.ShareFollowansweredMay 16, 2013 at 8:15ToXinEToXinE31822 silver badges1414 bronze badgesAdd a comment|
I've recently installed VS 2012 and .net Framework 4.5, and everything is mostly ok, except that I occasionally get the error: This operation requires IIS integrated pipeline mode.I of course have Managed pipeline mode: Integrated in IIS.protected override void OnLoad(EventArgs e) { var st = new StackTrace(true); string message = String.Format("Redirect to url: {0}, Stack Trace:\r\n{1}", url, st); Trace.TraceInformation(message); } protected void Application_Start(Object sender, EventArgs e) { Trace.Listeners.Add(new OurAspTraceListener(Context)); }And the Custom Trace listener is pretty simple.private class OurAspTraceListener : TraceListener { private readonly HttpContext _context; public OurAspTraceListener(HttpContext context) { _context = context; _context.Trace.IsEnabled = true; } public override void Write(string message) { _context.Trace.Write(message); // it's throwing here. } public override void WriteLine(string message) { _context.Trace.Write(message); } }It's really weird because if I just hit refresh it continues without any problem.Any help would is appreciated, Thanks.
Since installing VS 2012, I'm getting intermittent PlatformNotSupportedException
Please adddependsOn: []in the iOS_QA_Build stage.My example:stages: - stage: DEV jobs: - job: A steps: - bash: echo "A" - stage: QA dependsOn: [] jobs: - job: A steps: - bash: echo "A"When you define multiple stages in a pipeline, by default, they run sequentially in the order in which you define them in the YAML file.dependsOn: []removes the implicit dependency on previous stage and causes this to run in parallel.For more details, please referSpecify dependencies.ShareFollowansweredAug 25, 2022 at 6:14Miao Tian-MSFTMiao Tian-MSFT2,21911 gold badge44 silver badges1414 bronze badges0Add a comment|
Is it possible to run the first two stages parallelly in Azure DevOps pipelines? By default, Each stage starts only after the preceding stage is complete unless otherwise specified via the dependsOn property.Current situation is:I would like to run both the stages, iOS_Dev_Build and iOS_QA_Build parallelly. No dependsOn condition is added for iOS_QA_Build. But by default, it is waiting for the iOS_Dev_Build stage to complete before it starts
Run first two stages parallelly in Azure DevOps pipeline
Your attempt fails because special symbols lose their syntactical value after expansion.evalis a way, but as we know it is prone to code injection so it is best avoided. We can trycmd1 | if [ condition ]; then cmd2 | cmd3 else cat fi | cmd4Though I'd love to skipcat, I couldn't find a way. Real example:echo XYZABC | if true; then tr Y B | tr Z C else cat fi | tr X AoutputsABCABCand changing the condition tofalseoutputsAYZABC.ShareFollowansweredJan 18, 2021 at 22:37QuasímodoQuasímodo3,8801414 silver badges2626 bronze badges6There's nothing wrong with havingcatthere. One can obviate the overhead by enabling loadable built-inteeand using it instead if that's an issue.–oguz ismailJan 19, 2021 at 5:38I had considered usingcat. But - won't that incur a significant performance penalty for large files?–einpoklumJan 19, 2021 at 7:53@oguzismail: What is a "loadable built-in tee"?–einpoklumJan 19, 2021 at 7:53@einpoklum Bash comes with various loadable builtins,teeis one of them. On my system they are stored in/usr/lib/bashand I can doenable -f /usr/lib/bash/tee teeand haveteeas a builtin command.–oguz ismailJan 19, 2021 at 8:08@oguzismail: Oh, that's a nice idea then. Can you make it into an answer?–einpoklumJan 19, 2021 at 8:25|Show1more comment
I need to form a pipeline of various commands. Some elements of the pipeline, or sequences of elements, are only relevant when some condition holds. Now, I could write:if [[ $whatever ]]; then cmd1 | cmd2 | cmd3 | cmd4 else cmd1 | cmd4 fibut that means repeatingcmd1andcmd4, plus, there may be several conditions and I don't want to write nested if's. So, I tried writing this:if [[ $whatever ]]; then pipeline_segment="| cmd2 | cmd3" else pipeline_segment="" fi cmd1 ${pipeline_segment} | cmd4but - the pipe symbol was not interpreted as an instruction to use a pipe.How do I have bash execute the pipeline I want it too?Note: You may assume a bash version of 4 or higher, but only if you must.
How to conditionally add a pipeline element in bash
The/operator does not work the way you have assumed. You just need to be a bit more explicit and change the last line in error2 tofun t -> t/2.0and then it should all work.The answers being out by a factor of 4 was the giveaway here.EDIT:To understand what happens with/here consider what happens when you expand out|>The following are all equivalenta |> (/) b ((/) b) a //by removing |> a / b //what happens when / is reinterpreted as a functionShareFolloweditedSep 24, 2013 at 0:26answeredSep 24, 2013 at 0:03John PalmerJohn Palmer25.4k33 gold badges4848 silver badges6767 bronze badges2would you like to say it will be interpreted asb/ainstead ofa/b?–Dzmitry MartavoiSep 24, 2013 at 8:213last line should be "b / a" imo. i'm sure |> (op) x is a huge source for bugs in f# code ;)–stmaxSep 24, 2013 at 13:38Add a comment|
have a code://e = 1/2*Sum((yi -di)^2) let error y d = let map = Array.map2 (fun y d -> (y - d) ** 2.0) y d let sum = Array.sum map (sum / 2.0) let error2 y d = Array.map2 (fun y d -> (y - d) ** 2.0) y d |> Array.sum |> (/) 2.0as i understood those functions should produce the same results, but there are a big difference in the results. Can anyone explain this?p.s. Simplified example:let test = [|1..10|] let res = test |> Array.sum |> (/) 5i expect test = 11 (sum(1..10) = 55 and then 55 / 5) but after Array.sum pipeline is not working as i want(as result test = 0).
How does F# pipeline operator work
I have good news!!Our friends at GitLab have been working on this feature. There is now a way to label your pipeline in release 15.5.1-ee.0!It uses theworkflowcontrol with a new keywordnameworkflow: name: 'Pipeline for branch: $CI_COMMIT_BRANCH'You can even use theworkflow:rulespair to have different names for you pipeline:variables: PIPELINE_NAME: 'Default pipeline name' workflow: name: '$PIPELINE_NAME' rules: - if: '$CI_PIPELINE_SOURCE == "merge_request_event"' variables: PIPELINE_NAME: 'MR pipeline: $CI_COMMIT_BRANCH' - if: '$CI_MERGE_REQUEST_LABELS =~ /pipeline:run-in-ruby3/' variables: PIPELINE_NAME: 'Ruby 3 pipeline'Find the docs here:https://docs.gitlab.com/ee/ci/yaml/#workflowThis feature is disabled by default in 15.5 because it is so new.You can enable the feature flag, which is namedpipeline_name.See this link to enable:https://docs.gitlab.com/ee/administration/feature_flags.html(You need to use the Rails Console to enable it. Pretty easy.)Note: Remember that theworkflowkeyword affects the entire pipeline instance.ShareFolloweditedOct 29, 2022 at 15:28answeredOct 29, 2022 at 15:23Traveler_3994Traveler_399418722 silver badges1212 bronze badges1This was exactly what i needed. thanks!!–Kim HollandFeb 6 at 22:30Add a comment|
How do I add a label to the GitLab pipelines when they run?This would be extremely helpful when you run a few nightly (scheduled) pipelines for different configurations on the main branch. For example, we run a nightly main branch with several submodules, each set at a point in their development (a commit point SHA) and I want to label that 'MAIN'. We run a second pipeline that I want to label 'HEADs', which is a result of pulling all of the HEAD's of the submodule to see if changes will break the main trunk when they are merged in.Currently it shows:Last commit message.Pipeline #commit SHABranch name'Scheduled'That is helpful, but it is very difficult to tell them apart because only the pipeline # changes between the pipelines.
How do I label pipelines in GitLab?
This answer is for people with multibranch pipeline projects. You might need the Basic Branch Build Strategies plugin if you don't have it already, but it may already be part of your setup.I added theChange Requestsbuild strategy and it worked. Go toBranch Sources, underBuild Strategies, clickAddand add theChange requestsconfiguration.Reference:https://issues.jenkins.io/browse/JENKINS-54864ShareFollowansweredJan 15, 2021 at 7:19KajalKajal5911212 silver badges2525 bronze badges21Does this still work in a current Jenkins? I am searching for this option but I am not able to find "Change requests"–ApolloJan 30, 2023 at 7:001It does work for multibranch pipelines, maybe you are missing the Basic Branch Build Strategies plugin I mentioned?–KajalFeb 28, 2023 at 22:46Add a comment|
I've read the answer fromtrigger jenkins build on tag creation with multibranch pipelineThe error is similar, only difference is that questions is fortagwhich seems due to not supported at the time of posting.I encountered this error onbranches, I have forked the repo, so I have anupstreamand anorigin, I pushed to both, and it shows the branch is there, but whenever I push any code, it just won't trigger.Here is the settings: Type: Github Enterprise
‘Jenkinsfile’ found, Met criteria, No automatic build triggered for jenkins, for git branch
Assuming you want a single redis call for set ops:pipe = redis_con.pipeline() for i in range(0,len(keys)): pipe.set(keys[i], vals[i]) pipe.execute()ShareFollowansweredFeb 16, 2017 at 18:06SoorajSooraj42244 silver badges1111 bronze badgesAdd a comment|
I have two lists keys= [k0,k1, ....kn] vals= [v0,v1, ....vn]I can set these key-values on redis in multiple steps doing the following:for i in range(0,len(keys)): redis_con.set(keys[i], vals[i])But this is multiple set operations. How can I do this in one async step?
Python, redis: How do I set multiple key-value pairs at once
Your code is doing well. I think you only need a$matchstage in the last of your pipeline.db.items.aggregate([ { $match: { $and: [ { "status": "active" }, { "name": { $exists: true } } ] } }, { $lookup: { as: "info", from: "inventory", let: { fruitId: "$id" }, pipeline: [ { $match: { $and: [ { $expr: { $eq: [ "$item_id", "$$fruitId" ] } }, { "branch": { $eq: "main" } }, { "branch": { $exists: true } } ] } } ] } }, { "$match": { "info": { "$ne": [] } } } ])mongoplaygroundShareFollowansweredFeb 24, 2022 at 3:46YuTingYuTing6,61122 gold badges66 silver badges1717 bronze badges0Add a comment|
I have these 2 simple collections:items:{ "id" : "111", "name" : "apple", "status" : "active" } { "id" : "222", "name" : "banana", "status" : "active" }inventory:{ "item_id" : "111", "qty" : 3, "branch" : "main" } { "item_id" : "222", "qty" : 3 }Now I want to to only return the items with "status" == "active" and with "branch" that exist and is equal to "main" in the inventory collection. I have this code below but it returns all documents, with the second document having an empty "info" array.db.getCollection('items') .aggregate([ {$match:{$and:[ {"status":'active'}, {"name":{$exists:true}} ] }}, {$lookup:{ as:"info", from:"inventory", let:{fruitId:"$id"}, pipeline:[ {$match:{ $and:[ {$expr:{$eq:["$item_id","$$fruitId"]}}, {"branch":{$eq:"main"}}, {"branch":{$exists:true}} ] } } ] }} ])Can anyone give me an idea on how to fix this?
Aggregate Lookup with pipeline and match not working mongodb
So I finally got this working, and it shall be preserved here for posterity.Start with a generator, here namediteratorbecause I'm currently too afraid to change anything for fear of it breaking again:def path_iterator(paths): for p in paths: print("yielding") yield p.open("r").read(25)Get an iterator, generator, or list of paths:my_files = Path("/data/train").glob("*.txt")This gets wrapped in our ...functionfrom above, and passed tonlp.pipe. In goes a generator, out comes a generator. Thebatch_size=5is required here, or it will fall back into the bad habit of first reading all the files:doc = nlp.pipe(path_iterator(my_paths), batch_size=5)The important part, and reason why we're doing all this, is thatuntil now nothing has happened. We're not waiting for a thousand files to be processed or anything. That happens onlyon demand, when you start reading fromdocs:for d in doc: print("A document!")You will see alternating blocks of five (our batch_size, above) "Yielding" and "A document". It's an actual pipeline now, and data starts coming out very soon after starting it.And while I'm currently running a version one minor tick too old for this, the coup de grace is multi-processing:# For those with these new AMD CPUs with hundreds of cores doc = nlp.pipe(path_iterator(my_paths), batch_size=5, n_process=64)ShareFollowansweredFeb 11, 2020 at 14:02Matthias WinkelmannMatthias Winkelmann16.1k88 gold badges6666 silver badges7676 bronze badgesAdd a comment|
All the examples that I see for using spacy just read in a single text file (that is small in size). How does one load a corpus of text files into spacy?I can do this with textacy by pickling all the text in the corpus:docs = textacy.io.spacy.read_spacy_docs('E:/spacy/DICKENS/dick.pkl', lang='en') for doc in docs: print(doc)But I am not clear as to how to use this generator object (docs) for further analysis.Also, I would rather use spacy, not textacy.spacy also fails to read in a single file that is large (~ 2000000 characters).Any help is appreciated...Ravi
read corpus of text files in spacy
Try this example:param_grid = [ {'penalty': ['l1'], 'solver': [ 'lbfgs', 'liblinear', 'sag', 'saga']}, {'penalty': ['l2'], 'solver': ['newton-cg']}, ]herel1will be tried with'lbfgs', 'liblinear', 'sag', 'saga'andl2will be tried with only'newton-cg'ShareFollowansweredMay 20, 2018 at 12:52Pratik KumarPratik Kumar2,23111 gold badge1717 silver badges4141 bronze badges21Solver lbfgs supports only 'l2'–ValentinFeb 17, 2021 at 15:44Solver lbfgs supports only 'l2' and 'none'–AviJun 21, 2022 at 15:42Add a comment|
just wondering how to separate parameters into a group and pass it to gridsearch? As i want to pass penalty l1 and l2 to grid search and corresponding solver newton-cg to L2.However, when i run the code below, the gridsearch will first run l1 with newton-cg and result in error msg ValueError: Solver newton-cg supports only l2 penalties, got l1 penalty.Thanksparam_grid = [ {'penalty':['l1','l2'] , 'solver' : ['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'] } ]
sklearn logistic regression parameter in GridSearch
You can try and useexit 1command to have the bash task failed. And it is often a command you'll issue soon after an error is logged.Additionally, you also may uselogging commandsto customized a error message. Kindly refer to the sample below.#!/bin/bash echo "##vso[task.logissue type=error]Something went very wrong." exit 1ShareFollowansweredJul 12, 2022 at 8:20Alvin ZhaoAlvin Zhao2,23511 gold badge33 silver badges88 bronze badgesAdd a comment|
I am using Azure Devops pipeline and in that there is one task that will create KVM guest VM and once VM is created through packer inside the host it will run a bash script to check the status of services running inside the guest VM. If any services are not running or thrown error then this bash script will exit with code 3 as i have added the value in bash script as belowset -eSo i want the task to fail if the above bash script fails, but issue is in the same task as KVM guest VM is getting created so while booting up and shutdown it throws expected errors but i dont want this task to fail due these error but to fail it only bash scripts fails.i have selected the option in task "Fail on Standard Error" But not sure how we can fail the task specifically for bash script error, can anyone have some suggestions on this?
How to fail Azure devops pipeline task specifically for failures in bash script
Jenkins is a build factory. In other words, its primary use is to run tasks that dedicated to build, integrate and deliver applications. It's a typicalDEVOPStool.Jenkins can be used to build pipelines (sequences of tasks) or to be called from a pipeline (to execute one of the pipeline's tasks).The great thing about Jenkins is that it integrates nicely with other devops tools:SCM: SVN, Github, GitlabBuild: maven, gradleTest: Cucumber reportsQuality: SonarQubeDeployment: Octopus Deploy, XL Deploy, Run Deck...You name it!However, Jenkins is generally not used to "code" and "operated" applications.A typical pipeline would be:Try Pull Request => Build Release Candidate => Deploy RC on Integration => Deploy on ProductionThis is a over simplified pipeline, just to give an idea of the scope of this tool. A production grade piepline should include security checks, and integrate nicely with human validation when needed.ShareFollowansweredJan 8, 2019 at 9:28avi.elkharratavi.elkharrat6,43866 gold badges4343 silver badges4949 bronze badgesAdd a comment|
I don't know where could I fit the jenkins tool in the following devops pipeline:code -> integrate -> test -> release -> deploy -> operateMaybe it can be in every steps ?
Where does Jenkins fit in the devops pipeline?
You have to read the data out of the block before it will be completed. Since noöne is readingsaveBlock, it will never be completed.If you don't need the data, the easiest solution is to useActionBlockinstead ofTransformBlock. Otherwise, just keep reading the data until the block is completed.ShareFollowansweredAug 10, 2015 at 9:03LuaanLuaan62.9k77 gold badges101101 silver badges121121 bronze badges1According to the official documentation: *'Signals to the System.Threading.Tasks.Dataflow.IDataflowBlock that it should not accept nor produce any more messages nor consume any more postponed messages.' *However, what you are saying is true, sounds a little bit contradicting–Martin MeeserJan 22, 2019 at 15:20Add a comment|
This code never reaches the last line because the completion doesn't propagate from the saveBlock to the sendBlock. What am I doing wrong?var readGenerateBlock = new TransformBlock<int, int>(n => { Console.WriteLine("Read " + n); Thread.Sleep(15); return n; }); var groupingBlock = new BatchBlock<int>(10); var saveBlock = new TransformManyBlock<int[], int>(n => { Console.WriteLine("Saving {0} items [{1}; {2}]", n.Count(), n.First(), n.Last()); Thread.Sleep(150); return n; }); var sendBlock = new TransformBlock<int, int>(n => { Console.WriteLine("Sending {0}", n); Thread.Sleep(25); return n; }, new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 2 }); readGenerateBlock.LinkTo(groupingBlock, new DataflowLinkOptions { PropagateCompletion = true }); groupingBlock.LinkTo(saveBlock, new DataflowLinkOptions { PropagateCompletion = true }); saveBlock.LinkTo(sendBlock, new DataflowLinkOptions { PropagateCompletion = true }); Parallel.For(0, 250, i => readGenerateBlock.Post(i)); readGenerateBlock.Complete(); sendBlock.Completion.Wait(); Console.WriteLine("Completed.");
C# TPL Dataflow - Completion not working
ms per record is extremely high, it should not be difficult to improve upon that, unless you are using some seriously antiquated hardware.Upgrade the hardware. SSD's, RAID striping and PCI express hard disks are designed for this kind of activity.Read the file in larger chunks at a time, reducing I/O waiting times. Perhaps use fread to dump large chunks to memory first.Consider using mmap to map a pointer between hard disk and memory.Most importantly profile your code to see where the delays are. This is notoriously difficult with I/O activity because it differs between machines and it often varies significantly at runtime.You could attempt to add multithreaded parsing, however I strongly suggest you try this as a last resort, and understand that it will likely be the cause of a lot of pain and suffering.ShareFollowansweredDec 23, 2014 at 10:10ChrisWard1000ChrisWard100053655 silver badges1515 bronze badges1Thank you for your many suggestions! See the update on op, I have dramatically reduced the 17ms and now I simply need to experiment with larger data sets to check true I/O performance.–PidgeyBAWKDec 23, 2014 at 13:19Add a comment|
I'm writing a program that involves analyzing CSV files of minimum 0.5GB (and maximum of over 20GB), I read from the CSV as follows withfstream,while (getline(fin,line)) {}, and doing an average of 17millisecs work on each comma separated record. Simple stuff.But, there are a LOT of records. So obviously, the program is I/O bound, butI was wondering whether I could improve the I/O performance. I can't resort to OpenMP as I would deal with CPU constraints, and buffering a file this large won't work either. So I might need some kind of pipeline...I have VERY little experience in multithreading in C++ and have never used dataflow frameworks. Could anyone point me in the right direction?Update (12/23/14) :Thanks for all your comments. You are right, 17ms was a bit much... After doing a LOT of profiling (oh, the pain), I isolated the bottleneck as an iteration over a substring in each record (75 chars). I experimented with#pragmasbut it simply isn't enough work to parallelize. the overhead of the function call was the main gripe - now 5.41μs per record, having shifted a big block. It's ugly, but faster.Thanks @ChrisWard1000 for your suggestions. Unfortunately I do not much have control over the hardware I'm using at the moment, but will profile with larger data sets (>20GB CSV) and see how I could introduce mmap/multithreaded parsing etc.
I/O bound performance - speedup?
You can make use of the inline modifier(?i):/(?i)test/
I am trying to use a regex pattern in Webmethods map step. The problem is to ignore the case of matching string using regex modifiers.E.g.:input is 'TEST' or 'test' or 'Test'Branch on 'input' /test/i : MAPBut as I read on different webmethods forums that using access modfiers in Webmethods is a limitation. So, I am unable to use '/i'.Any idea or hint on how I could do it?Thanks in advance.
Regex modifiers in Webmethods
Whoops, I forgot to add aFILES_STOREin the settings. Lookherefor an explanation.Relevant quote:Then, configure the target storage setting to a valid value that will be used for storing the downloaded images. Otherwise the pipeline will remain disabled, even if you include it in the ITEM_PIPELINES setting.
This is my settings.py:from scrapy.log import INFO BOT_NAME = 'images' SPIDER_MODULES = ['images.spiders'] NEWSPIDER_MODULE = 'images.spiders' LOG_LEVEL = INFO ITEM_PIPELINES = { "images.pipelines.WritePipeline": 800 } DOWNLOAD_DELAY = 0.5This is my pipelines.py:from scrapy import Request from scrapy.pipelines.files import FilesPipeline class WritePipeline(FilesPipeline): def get_media_requests(self, item, info): for url in item["file_urls"]: yield Request(url) def item_completed(self, results, item, info): return itemIt is very standard, normal stuff. And yet this is a line of my log:2015-06-25 18:16:41 [scrapy] INFO: Enabled item pipelines:So the pipeline is not enabled. What am I doing wrong here? I've used Scrapy a few times now, and I'm fairly positive the spider is fine. The item is just a normal item withfile_urlsandfiles.
Scrapy does not enable my FilePipeline
function bcd () { Param([parameter(ValueFromPipeline=$true)][Hashtable[]]$table) Begin {$tables= @()} Process {$tables += $table} End {$tables.count} } @{ a = 10 }, @{ b = 20 }, @{ c = 30 } | bcd bcd -table @{ a = 10 }, @{ b = 20 }, @{ c = 30 } 3 3
Here's a function which accepts an array of hashtables via argument:function abc () { Param([Hashtable[]]$tables) $tables.count }Example use:PS C:\> abc -tables @{ a = 10 }, @{ b = 20 }, @{ c = 30 } 3Here's a function which accepts Hashtables via pipeline:function bcd () { Param([parameter(ValueFromPipeline=$true)][Hashtable]$table) $input.count }Example use:PS C:\> @{ a = 10 }, @{ b = 20 }, @{ c = 30 } | bcd 3Is there a way to define function which can accept a hashtable array via argument or pipeline via the same parameter? I.e. a function which can be called in both of the ways shown above. Note that I'll need the entire array of hashtables in a single variable (hence the use of$inputabove inbcd).
Function [Hashtable[]] parameter that can come from pipeline or argument
Most properties of a container are not propagated to views into those containers. The only properties which are are those which are relevant to that container being a range: iterator categories, sizeability, etc. This is in part because logically a view does not have to follow the rules of the container it is a view of.Considerset. Aset's elements are ordered and unique (relative to that order). Aviewof those elements don't have to maintain this property. Atransformview of a set could use a hash-like function that generates the same value from multiple different inputs, violating the unique requirement of theset. It could transform elements such that they don't maintain the ordering relationship (or transform them into elements that cannot be ordered).In your case, toreversea setfundamentallychanges the nature of theset. After all, the ordering function is part of theset's type; it is an inherent part of that container. If you reverse the ordering, you're using a different ordering. At best, it is asetwith a different type.Amapmaps from keys to value. Akey_viewof a map... is just a sequence of keys. The fundamental nature of the container is lost.
With C++23, we get pretty-printing of ranges, currently available in{fmt}. E.g.std::setis formatted with{}whereas sequence containers such asstd::vectorare formatted with[]. Internally, theformatterclass template dispatches on the presence of a nested typedefkey_type(whichstd::setdoes have andstd::vectordoes not).Piping astd::setinto any kind ofviewwill drop thekey_typetypedef and pretty-print it as astd::vector.#include <ranges> #include <set> #include <fmt/ranges.h> int main() { auto s = std::set<int>({ 2, 3, 5, 7 }); fmt::println("{}", s); // nice, formats as { 2, 3, 5, 7 } fmt::println("{}", s | std::views::reverse); // meh, now uses [] instead of {} fmt::println("{}", s | std::views::reverse | std::views::reverse); // and it won't recover the braces }Godbolt linkQuestion: is there any way to preserve the "set-ness" (more generally: the formatting kind) of an input range in a pipeline?
Range pipeline drops typedef used for formatting style