Columns: Response, Instruction, Prompt
In the first example you are passing the output of cat file to the input of ls -l. Since ls -l does not read its standard input, it does nothing with the output of cat file. In the second example, however, you are using $(cat file), which puts the output of cat file in place of an argument passed to ls -l, and this time ls -l has the text inside file in the right place to do something with it. The issue here is noticing the difference between the standard input of a program and the arguments of a program. Standard input is what you read when you call scanf in C, for example; the arguments are what you get in the argv pointer passed as a parameter to the main function.
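To make the distinction concrete, here is a small hedged sketch (the file names and contents are illustrative, not taken from the question's machine):

# ls ignores its standard input, so this still lists the current directory:
echo "Videos/Arrow" | ls -l

# wc -l does read standard input, so piping works for it:
echo "Videos/Arrow" | wc -l

# Command substitution turns the file's contents into arguments for ls:
ls -l $(cat file)

# xargs is another way to convert standard input into arguments:
cat file | xargs ls -l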
I am a newbie to Linux. I have been learning about the cat command when I tried this:
harish@harish-Lenovo-G50-45:~$ cat file
Videos/Arrow/Season*
harish@harish-Lenovo-G50-45:~$ cat file | ls -l
The command displays the contents of the current folder instead of the folder mentioned in the file. But when I did this:
harish@harish-Lenovo-G50-45:~$ cat file
Videos/Arrow/Season*
harish@harish-Lenovo-G50-45:~$ ls -l $(cat file)
the contents of the expected folder are displayed correctly. Why can't I use the pipe in this case?
Pipelining of cat and ls commands [duplicate]
I believe you are installing the node modules using npm install; you should also save those modules in your package.json file, which you can do with npm install --save. The recommended best practice would be to set up a Build Pipeline. There could be 3 stages or more:
Build stage: builds the app, doing things like npm install, so your node_modules folder gets created for you.
Test stage: tests the app; doing things like npm test will run all the tests in your app.
Deploy stage: once the build and test stages run successfully, the deploy stage will actually deploy the app to the Bluemix domain.
We're trying to deploy an AngularJS2 application to Bluemix, but we're missing the folder "node_modules" after the application is deployed to the server. We're using npm to build the application. I found the following post that mentions the problem: https://developer.ibm.com/answers/questions/181207/npm-install-within-subdirectory-not-creating-node.html My question would now be: what's the recommended best practice?
Missing node_modules when deploying AngularJS2 application to Bluemix
Simple: you've got the wrong parameter name. It is class_weight, not class_weights. From the documentation: class_weight : dict, list of dicts, “balanced”, “balanced_subsample” or None, optional
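A minimal sketch of the corrected parameter grid, assuming the same pipeline step names used in the question (only the key changes, from RFC__class_weights to RFC__class_weight):

param_dict = {
    'RFC__n_estimators': [100, 150],
    'RFC__class_weight': [{0: 1, 1: 2}, {0: 1, 1: 4}],  # singular, matching the estimator's parameter
    'PCA__n_components': [60, 80],
}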
I am searching over parameters in pipeline usingGridSearchCVin Scikit. I made my code work, but if I want to addclass_weights, I am hitting a wall.from sklearn.pipeline import Pipeline RFC = RandomForestClassifier() PCA = PCA() pipe = Pipeline(steps=[('PCA', PCA), ('RFC', RFC)]) param_dict = {'RFC__n_estimators': [100,150], 'RFC__class_weights': [{0:1,1:2},{0:1,1:4}], 'PCA__n_components': [60,80]} from sklearn.grid_search import GridSearchCV estimator = GridSearchCV(pipe, param_dict, scoring='roc_auc') estimator.fit(X_train, y_train)What is the proper way how to add this parameter to GridSearch?
Gridsearch in pipeline
Let's assume you're piping a string through bash:
# this is your starting code
f() { tr -d [[:digit:]] | tr '[:upper:]' '[:lower:]' | tr -cd '[:alnum:]\nčšž'; }
# defining a test variable
s='hello123WORLD456'$'\n''čšž'
f <<<"$s"   # writes "helloworld", a newline, then "čšž"
...this can trivially be changed by combining the first and third, since they're both performing the same basic operation (deleting all characters in a given set, even if in one of the two cases the set is defined in an exclusionary manner):
# this behaves the same way
f() { tr '[:upper:]' '[:lower:]' | tr -cd '[:alpha:]\nčšž'; }
...however, if running a modern bash release, you can do the same thing with a pair of parameter expansions, without any of the overhead of running tr at all:
s_lowercase=${s,,}
s_alpha=${s_lowercase//[![:alpha:]čšž]/}
echo "$s_alpha"
I use the tr command 3 times sequentially in this case:
tr -d [[:digit:]] | tr '[:upper:]' '[:lower:]' | tr -cd '[:alnum:]\nčšž'
Is it possible to combine the 3 tr commands into 1 tr command? Or is there some way to do this faster?
Is it possible to combine more tr commands?
It is not that trivial, but you can do this:
Get-ChildItem | foreach {
    $a = Get-Content $_
    Set-Content $_ -Value "hi", $a
}
Though to be honest, I do think it matches your definition of a pipeline.
It's quite convenient to use a Linux shell to append some content to a file; pipeline and stream operations do this. But in PowerShell the pipeline works at the object level, not the file level. So how can I, for example, insert a row "helloworld" into a list of files so that it becomes their first line?
How to insert one row to each file?
Regardless of having Forwarding, if you only have one port to access memory (e.g. unique Data & Instruction memory bus) and to simplify let's say there is no cache in the system (so every memory access needs to use the memory unit) then every instruction that needs the MEM stage to use the memory bus will generate a structural hazard, as the CPU won't be able to perform the FETCH and MEM stages in parallel because they both need to access memory.If instead you have two ports to access memory (e.g, one for Instructions and another for data), then the structural hazard noted above will be avoided as each memory-access stage will use its own bus+memory.
How does having unique Data & Instruction memory affect the standard 5-stage pipeline? What about with & without forwarding? What's the advantage of having a separate memory for each?
MIPS/Pipeline regarding unique Data & Instruction memory [closed]
The error messages from scripts 1..4 shouldn't vanish down a black hole; you only redirected their standard output as the files given to diff. For example, given these files:
$ cat script1
#!/bin/bash
echo $0 stdout
echo $0 stderr >&2
$ cat script2
#!/bin/bash
echo $0 stdout
cat -
echo $0 stderr >&2
$ cat script3
#!/bin/bash
echo $0 stdout
echo $0 stderr >&2
$ cat script4
#!/bin/bash
echo $0 stdout
cat -
echo $0 stderr >&2
$
The output from your command line is:
$ diff <(script1 | script2) <(script3 | script4)
./script1 stderr
./script3 stderr
./script2 stderr
./script4 stderr
1,2c1,2
< ./script2 stdout
< ./script1 stdout
---
> ./script4 stdout
> ./script3 stdout
$
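In other words, each script's stderr already reaches the terminal of the outer script; it just isn't part of what diff compares. If the goal is to capture it somewhere, a hedged sketch (the log file names are made up) is to redirect each script's stderr explicitly inside the process substitutions:

# keep stdout for diff, send each pipeline's stderr to its own log file
diff <(script1 2>err12.log | script2 2>>err12.log) \
     <(script3 2>err34.log | script4 2>>err34.log)
cat err12.log err34.log   # inspect the captured error output afterwards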
Using the answer found in "How can you diff two pipelines in Bash?", I have written some shell scripts that I want to compare the output of:
diff <(script1 | script2) <(script3 | script4)
However, any errors printed to STDERR in any of the scripts in the subshell pipelines disappear. How can I get them to print in my outer level script (the one that contains the diff)?
How do I print STDERR from a subshell?
REC has the wrong type. It should be NK instead of NK_T, since you deal with a single record.
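A hedged sketch of the corrected function body, using the same names as the question (not run against a live database):

FUNCTION NEK_SLU_PR(W_KOD VARCHAR2) RETURN NK_T PIPELINED AS
  REC NK;  -- a single record, not the collection type NK_T
  SHT NUMBER; VVD NUMBER; NEKOM NUMBER; N_GOL NUMBER;
BEGIN
  SHT := 3; VVD := 4; NEKOM := 5; N_GOL := 6;
  SELECT SHT, VVD, NEKOM, N_GOL INTO REC FROM DUAL;  -- four values into four record fields
  PIPE ROW (REC);
  RETURN;
END NEK_SLU_PR;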
I create a function which returns a table. I create a record type, then make a table type of that record.
CREATE OR REPLACE PACKAGE FOR_SELECT AS
  TYPE NK IS RECORD ( SHT NUMBER, VVD NUMBER, NEKOM NUMBER, N_GOL NUMBER);
  TYPE NK_T IS TABLE OF NK;
  FUNCTION NEK_SLU_PR(W_KOD VARCHAR2) RETURN NK_T PIPELINED;
END FOR_SELECT;
And my simple function:
CREATE OR REPLACE PACKAGE BODY FOR_SELECT AS
  FUNCTION NEK_SLU_PR(W_KOD VARCHAR2) RETURN NK_T PIPELINED AS
    REC NK_T;
    SHT NUMBER; VVD NUMBER; NEKOM NUMBER; N_GOL NUMBER;
  BEGIN
    SHT := 3; VVD := 4; NEKOM := 5; N_GOL := 6;
    SELECT SHT, VVD, NEKOM, N_GOL INTO REC FROM DUAL;
    PIPE ROW (REC);
    RETURN;
  END NEK_SLU_PR;
END FOR_SELECT;
I have 4 variables and I select them all into a variable that has the table type, but I get ORA-00947: not enough values. What am I doing wrong?
ORA-00947 in PIPELINED function
So, you have two types of users in your app: 1. User, 2. CordovaUser. You need two different links for the two different users, and somehow the pipeline should know that one of them is a CordovaUser. First, in your settings, do this:
FIELDS_STORED_IN_SESSION = ['user_type']
Then the links will look like this:
1. <a href="{% url 'social:begin' 'facebook' %}">Login as User</a>
2. <a href="{% url 'social:begin' 'facebook' %}?user_type=cordova">Login as CordovaUser</a>
Then customize create_user to look something like this:
def create_user(strategy, details, user=None, *args, **kwargs):
    if user:
        return {'is_new': False}
    fields = dict((name, kwargs.get(name) or details.get(name))
                  for name in strategy.setting('USER_FIELDS', USER_FIELDS))
    if not fields:
        return
    user_type = strategy.session_get('user_type')
    if user_type != 'cordova':
        return {
            'is_new': True,
            'user': strategy.create_user(**fields)
        }
    else:
        return {
            'is_new': True,
            'user': create_cordova_user(**fields)
        }
Then create that create_cordova_user method and you are done. Hope this helps!
I need two distinct login processes in my Django server: 1. Login for website users (I already have this). 2. Login for app users (Cordova InAppBrowser). The login pipeline for app users must also generate a token and return it to the Cordova app. How should I go about creating a parallel pipeline?
Custom pipeline for Cordova social auth
You just need to escape the \:
std::string cmd = "awk -F\\| '{ var1=$2; var2=$5; print var1, var2 }' 1.txt";
//-----------------------^
When you create your C++ string, \ is treated as an escape character unless you escape it. Alternatively in this case, you could just quote the delimiter, avoiding the need to escape it:
std::string cmd = "awk -F'|' '{ print $2, $5 }' 1.txt";
I assume that your script may be more complex than this in reality, but I simplified it anyway, in case that helps. A final option, using raw string literals (supported since C++11):
std::string cmd = R"(awk -F\| '{ print $2, $5 }' 1.txt)";
The R prefix (together with the parentheses) means that \ is no longer interpreted as an escape character.
I have an awk command which I want to wrap in system():
std::string cmd = "awk -F\| '{ var1=$2; var2=$5; print var1, var2 }' 1.txt";
system(cmd.c_str());
I am getting the error:
error: unknown escape sequence '\|'
I have tried giving the quotes in different ways, but nothing helps.
pipe inside (awk) System command in cpp
What about adding the Person item as a field on the AnotherPerson item? Remember that you can always use the meta parameter of a request to pass information between callbacks. You could do something like:
def parse_person(self, response):
    ...
    yield Request(url=someurl, callback=self.parse_anotherperson,
                  meta={"some_key": "some_person_id"})
Then you could add a reference to the previous Person on your AnotherPerson item as a field, or something else.
I use scrapy in order to scrape a social network and then get the data in a NEO4J database.My challenge here is to relate 2 items each other:class person(scrapy.Item): name=Field() class AnotherPerson(scrapy.Item): name=Field()I want to save those two items in my graph database by saying:Person has relationship with AnotherPerson()What I need here is to send two items in ONE pipeline !! How can we do this ? I tried to send it through a list, but scrapy doesn't accept the list as soon as a collection is in there.Here is my pseudo code:I get a list of person (each person has profile and a list of firends like facebook)For each person in this list:I open his profile (through a request and send the response to a callback)I take the response and create a item: Person() and fill itI send the item with a "yield"Then I open his list of friend (through a request and send the response to a another callback)I have the friend list pageThen For each friend in this list (the page display a name and a city):create an item: AnotherPerson()I fill this item with the name and the cityI send the item with a "yield"I have two pipelines. They work well to save the data in database, but I don't have any clue to how I can relate them because for that I need to do that in the same process (ie. pipeline).I'm not sure if I've been clear, so don't hesitate to ask for clarifications.
Multiple Items into one pipeline -- NEO4J database with scrapy use case
Perhaps you need the tee command. You can use it to "fork the pipe", which means that an output file can be created after a specific command in the pipe. It is very useful when looking for errors. For example:
cat access.log | some filter commands | tee out01.txt \
  | some other filter | tee out02.txt | ./your_script > summary.csv
More examples here.
I'm at a bit of a loss on how to proceed with the task of piping log file contents to a csv file based on certain criteria. Essentially, the problem is something like this:Write a script that receives http logs (or any arbitrary .log file) via pipe input, and outputs a summarized csv in the number of hits per url per day.Example: executing the pipe commandcat access.log|some filter commands|./your_script > summary.csvcreates a text file called summary.csv with the content:" Action and path. 2015-01-01, 2015-01-02. 2015-01-03GET /index.php, 34, 53, 65POST /administrator, 32, 59, 39..."and so forth.The problem I'm facing at the moment is figuring out how to identify and execute specific parts of the pipe input command, and apply filters, before feeding it to the output pipe. From what I'm familiar with, an array of command parameters (such as "cat", "gedit", ">", "|", etc) might work, but this leaves the problem of identifying them and executing them as a pipe command would, instead of just one after the other.I've searched quite thoroughly, but as yet found nothing even remotely helpful, aside from the suggestion to divide the pipe command into separate instructions and execute them one by one. If anyone can suggest an easier and more effective way to do this, or any advice on this particular problem, it'd be much appreciated. Thanks in advance.
Pipe logs to csv in bash script
I suppose the best explanation is in your textbook; Stack Overflow is a little narrow for in-depth explanations, but anyhow: a pipeline is simply a way to break a chain of tasks down into sequential steps. A pipelined CPU has special latches between the steps that synchronize with the clock, so that on every clock cycle each of the steps can perform its task and send its result to the next one. The big benefit of pipelining comes from the fact that once you have fed the first element into the pipe (be that an instruction, a bag of laundry, or whatever), the pipe is free to accept the next one on the following cycle, long before the previous element has finished the complete datapath (as it would on a multi-cycle MIPS). This allows you, in steady state, to feed one element into the pipe per cycle, assuming no control/data hazards are detected. Therefore the peak throughput of the machine remains 1 element per cycle (IPC=1 in CPU nomenclature), while you have virtually no restriction on the length of the cycle! Theoretically, you can divide the work into more stages that are simpler and shorter, shortening the cycle time (raising the frequency, and the overall throughput of work per unit time). Of course there are limits, as the CPU industry discovered not so long ago: once the pipe becomes too deep, the penalty of flushes (which we ignored earlier) becomes huge, so it's not all roses. Finding the sweet spot of depth and complexity is basically the key point in pipeline design.
What does it mean to have a pipelined datapath in MIPS architecture?All the examples I have read include doing laundry and waiting for certain tasks to finish, before moving on to other ones are fairly simple to comprehend.I was hoping for a more in depth technical explanation of how exactly a pipelined datapath helps MIPS architecture run faster and how stalls work.
Pipelined Datapath
We'd need the full datapath details from your textbook or assignment (I assume) to be sure. Here are some possibilities:
The sub instruction stalls for 2 cycles before entering Execute, because it needs to wait for the lw result to finish Writeback.
The add instruction stalls until sub has finished Writeback, again because it's waiting for a result.
The detail that's relevant to both of these possibilities is when the results of a given instruction become available to later dependent instructions. Does your datapath have register bypassing of some sort? Are results in Writeback available the same cycle, or the next?
I'm trying to understand how the following MIPS code in a pipelined datapath would execute.lw $4, 100($2) sub $6, $4, $3 add $2, $3, $5The MIPS instruction set has 5 stages (fetch, decode, execute, memory access, write back).The answer is: 8 cycles but I'm having difficulty understanding why. Here is what the pipeline could look like (incomplete).C F D E M WB 1 lw 2 sub lw 3 add sub lw 4 add sub x lw 5 add sub lw 6 add sub 7 8Questions: Why the x (stall?) at 4 and 5? How do I come up with the cycles including 7 and 8?
MIPS pipeline cycles
We are not supposed to do your homework here, but: if you assume that you can split the operations evenly into 5 stages and data dependencies are ignored, then the speedup would be 800/(160+40) = 4.
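Spelling the arithmetic out in LaTeX, under the stated assumptions of perfectly even stages and no hazards:

\[
T_{\text{stage}} = \frac{800\,\text{ps}}{5} = 160\,\text{ps},\qquad
T_{\text{cycle}} = 160\,\text{ps} + 40\,\text{ps} = 200\,\text{ps},\qquad
\text{speedup} = \frac{800\,\text{ps}}{200\,\text{ps}} = 4.
\]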
Calculating the maximum speedup of a single-cycle CPU converted into a 5-stage pipelined CPU. The single cycle has a time of 800 ps. The pipelined stages are separated by registers that take 40 ps. What I have so far is: 800/(40*5) = max speedup of 4. I'm not sure if I'm going about this correctly.
Max speedup in pipelined CPU
What @David said: using an iframe would probably do what you want. In that vein, you can detect whether you're in an iframe with
var isInIFrame = function() {
    return window != window.top;
};
I use that to change my CSS depending on whether I'm in an iframe or not:
var updateCSSIfInIFrame = function() {
    if (isInIFrame()) {
        document.body.className = "iframe";
    }
};
Then I can use CSS to change the formatting, e.g.:
/* only applies if in an iframe, assuming the function above was called */
body.iframe {
    width: 100%;
    height: 100%;
    margin: 0px;
    padding: 0px;
    overflow: hidden;
}
.iframe>canvas {
    width: 100%;
    height: 100%;
}
I am doing some work with Pipeline Pilot and noticed that all of the built in HTML components that do things, like collapsible panels, tabs, or anything else that presumably has some javascript that I can't access causes my otherwise working WebGL component to break on load.Is there a way to "sandbox" or otherwise isolate a WebGL component for it's own protection? Weird question, and not the best way to look at it, but I can't change any of the code inside of the WebGL component, and I can't change any of the internal Pipeline Pilot code, so I need an inelegant solution of any kind.
Sandbox WebGL Plugin
You can use -ErrorVariable without a Try/Catch construct and even without -ErrorAction. So, for example, you could do this:
Get-ADUser -Filter {Enabled = 'False'} -Properties Modified -ErrorVariable GetADUserErr |
    Where Modified -LT (Get-Date).AddDays(-30) -ErrorVariable WhereObjectErr |
    Remove-ADUser -Confirm:$false -ErrorVariable RemoveADUserErr

If ($GetADUserErr) { Write-Output $GetADUserErr }
If ($WhereObjectErr) { Write-Output $WhereObjectErr }
If ($RemoveADUserErr) { Write-Output $RemoveADUserErr }
That way you have a different variable storing the error of each command, and you get the default error-handling behaviour (from $ErrorActionPreference, which is "Continue" by default). Also, your error variables ($GetADUserErr, $RemoveADUserErr, etc.) are objects which can be piped to Out-File to log the errors into a file. Even nicer, these error variables are objects of type ErrorRecord, which have a number of properties such as TargetObject, Exception, InvocationInfo, FullyQualifiedErrorId, etc., so you can further manipulate them to log exactly what you want.
I was wondering if someone would be able to give me some tips on handling non terminating errors through the pipeline? Here is an example of what i would like to do.Get-ADUser -Filter {Enabled = 'False'} -Properties Modified | Where Modified -LT (Get-Date).AddDays(-30) | Remove-ADUser -Confirm:$falseBasically find all users that are disabled where the last modified time stamp is less than 30 days ago and delete them. So what i'm looking for is how to handle and error if one where to occur so that it could be loggedI thought of using try catch, but if an error occurs then the entire command stops. I want it to be a non terminating error, so that it will continue to process all the users that it find, but then have a way to see if there where any error so that i can log themAny suggestions?
Handling error through the pipeline for logging
I have made a custom command for this which: asks for a site name (using Context.ClientPage.Start and Context.ClientPage.ClientResponse.Input), creates a site structure based on a template branch, and creates media library folders. I did not need users, but you can add that as well. The only thing left to do manually is adding the configuration files. There are many blog posts about adding custom commands, like:
http://www.sitecore.net/Learn/Blogs/Technical-Blogs/John-West-Sitecore-Blog/Posts/2013/04/Add-Debug-Command-to-Content-Editor-in-the-Sitecore-ASPNET-CMS.aspx
And also about branches:
http://www.sitecore.net/Learn/Blogs/Technical-Blogs/Getting-to-Know-Sitecore/Posts/2012/10/Page-Editor-Secrets-1-Complex-Content-Creation.aspx
I have a sitecore 7.0 solution with several sites. Every time that I add a new site I need to create specific users (author, approver, etc..) and a specific Media Folder to the website.Question: Is there any way to make this automatically? I was thinking to play a bit with the pipelines but I'm not sure exactly how to start..Thank you
sitecore Add users and Media folder after create sub site
Note that in a single-cycle, non-pipelined design the instructions do not all necessarily need the same amount of time; rather, the next instruction simply cannot start until the next clock cycle, and the current instruction may effectively finish before its cycle ends, because the cycle length is determined by the longest instruction (e.g. a register add completes before a load in a RISC). A pipelined processor, by contrast, is multi-stage, with registers between stages to store and propagate the processor state. On a pipelined processor we save time by overlapping the sub-stages of consecutive instructions; even though the latency of an individual instruction increases, the overall time decreases. Also, not every instruction needs to go through every stage (load vs. add again), so the overall latency of each instruction nominally spans all the stages even if its actual execution would have needed fewer cycles. So you can say that the latency of each instruction is the same, but not the execution time or cycles it truly consumes.
I understand that single-cycle programs are not very efficient. One reason is because not all instructions are equal in length, but in a single-cycle program, all instructions are completed in the same length of time.In pipeline, throughput is increased, which means the time between one output and the next will be shorter than in a single-cycle implementation after you reach a certain point. But then can you say that instructions in a pipelined approach take the same amount of time (going from IF/Instruction Fetch to WB/Writeback)? Or is this the wrong conclusion?
Single-cycle vs a pipelined approach
You'll need to create your own function for that if you want to do it in the pipeline:function format-conditional { param ([bool]$condition) if ($condition) {$input | format-wide } else {$input} } $test = $true gci | format-conditional $test
I want to conditionally applyFormat-Wideto a pipeline:Get-ChildItem | Format-WideHow can I make| Format-Widepart conditional on a variable? For example, apply| Format-Wideonly if$conditionisTrue.Edited: the following is what I want to achieve:function format-conditional { param ([bool]$condition) if ($condition) {$input | Format-Wide -Column 3 } else {$input} } Invoke-Expression ("Get-ChildItem $Args") | %{ $fore = $Host.UI.RawUI.ForegroundColor $Host.UI.RawUI.ForegroundColor = 'Green' echo $_ $Host.UI.RawUI.ForegroundColor = $fore } | format-conditional $falseBut with this theGreencolor is gone.
Conditional Format-Wide in pipelines
Short version, using your code:
$s = ""
cat $f | % {
    if ($_ -match "^-") { $s += $_ }
    else { $s; $s = $_; }
} -end { $s } | out-file x.txt
Longer version:
function glue {
    [CmdletBinding()]
    param(
        [Parameter(ValueFromPipeline=$true)]
        $line
    )
    begin { $output = "" }
    process {
        if ($line -match "^-") { $output += $line }
        else { $output; $output = $line; }
    }
    end { $output }
}
cat $f | glue | out-file x.txt
I need to process a file that contains some lines beginning with a dash (-); these are continuation lines that need to be appended to the previous line. so what I have is something like:Lorem ipsum dolor sit amet, consectetur - adipiscing elit. Donec - consectetur lotis. Sed a est dui. Curabitur placerat a tortor - vel sodales.and what I want is:Lorem ipsum dolor sit amet, consectetur - adipiscing elit. Donec - consectetur lotis. Sed a est dui. Curabitur placerat a tortor - vel sodales.so I've written something like this:$s = "" cat $f |%{ if ($_ -match "^-") { $s += $_ } else { $s; $s = $_; } } |out-file x.txtMy question is: in cases when the file ends with dashed lines, the script never outputs the final line, because upon receipt of the final line from the pipe, it doesn't know it's the final line.In Perl I used to be able to use a construct like END {} to do these things. How would that be handled in Powershell?UpdatePerl is relevant because in that language I could do something like (wheretxtis a file containing the relevant text):perl -lne ' BEGIN { $s = "" }; if (/^-/) { $s .= $_ } else { print $s; $s = $_; } END { print $s; } ' txtwhere, as you can see, the END{} construct solves my problem
Capturing the last line
You should be able to specify the retry semantics on the send adapter which makes the call to the web service.
I have a custom receive pipeline in BizTalk 2009 which after receiving the message from a file location connects to a web service to fetch some data. Sometimes there is some issue with the web service and Biztalk is not able to connect to the same and hence the message gets suspended. If we resume the message after say 1-2 minutes the message is processed. So I want to implement a retry mechanism in the receive pipeline only so that there is no manual effort required. can this be done somehow?
Can we implement retry mechanism in a custom receive pipeline of biztalk
pipeline.InOrder() and pipeline.After() only order pipeline execution, not ordinary code execution. There is a method called finalized that is executed right after the last output is written, i.e. when your Log2Bq() pipeline finishes its execution, so:
class Log2Stat(base_handler.PipelineBase):
    def run(self, _date):
        print "start track"
        yield pipelines.Log2Bq()

    def finalized(self):
        print "finish track"
        taskqueue.add(
            url='/worker/update_daily_stat',
            params={"date": str(_date.date())}
        )
If you want to use pipeline.InOrder() or pipeline.After(), you should wrap your taskqueue code in another pipeline and yield it after pipelines.Log2Bq():
with pipeline.InOrder():
    yield pipelines.Log2Bq()
    yield pipelines.YourOtherPipeline()
https://code.google.com/p/appengine-pipeline/wiki/GettingStarted#Execution_orderingI tried to add a callback function which execute after the Log2Bq done. But it doesn't work either I usepipeline.Afterorpipeline.InOrder. In the following code sample, the taskqueue will execute immediate without waiting for Log2Bq. To fix the issue, do I need to create another pipeline to hold the taskqueue in order to make the execute order works?class Log2Stat(base_handler.PipelineBase): def run(self, _date): print "start track" with pipeline.InOrder(): yield pipelines.Log2Bq() print "finish track" taskqueue.add( url='/worker/update_daily_stat', params={ "date": str(_date.date()) } )
Appengine pipeline, how to let function execute right after the pipeline work finish
You should be able to do this with Sqoop by using the option called --staging-table. What this does is basically act as an auxiliary table that is used to stage exported data. The staged data is finally moved to the destination table in a single transaction. So by doing this, you shouldn't have consistency issues with partial data. (Source: Sqoop documentation.)
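A hedged sketch of the export command with a staging table (the connection string, table names, directory, and credential file are placeholders, not values from the question):

sqoop export \
  --connect jdbc:mysql://dbhost/analytics \
  --username dbuser \
  --password-file /user/hive/.dbpass \
  --table log_last_48h \
  --staging-table log_last_48h_stage \
  --clear-staging-table \
  --export-dir /user/hive/warehouse/staging_log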
Background :I have a Hive Table "log" which contains log information. This table is loaded with new log data every hour. I want to do some quick analytics on logs for past 2 days, so i want to extract last 48 hours of data into my relational database.To solve the above problem I have created a staging hive table which is loaded by a HIVE SQL query. After loading the new data into the staging table, i load the new logs into relational database using sqoop Query.Problem is that sqoop is loading data into relational database in BATCH. So at any particular time i have only partial logs for a particular hour.This is leading to erroneous analytics output.Questions:1). How to make this Sqoop data load transactional, i.e either all records are exported or none are exported.2). What is best way to build this data pipeline where this whole process of Hive Table -> Staging Table -> Relational Table.Technical Details:Hadoop version 1.0.4Hive- 0.9.0Sqoop - 1.4.2
How to create a data pipeline from hive table to relational database
You can use the decodebin2 or uridecodebin element. These elements contain auto-detection capabilities.
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-plugins/html/gst-plugins-base-plugins-uridecodebin.html
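For example, a hedged gst-launch sketch (the URI is a placeholder) that lets uridecodebin pick the right demuxer/decoder for whatever the source turns out to be:

gst-launch-1.0 uridecodebin uri=http://example.com/somefile.mp3 \
    ! audioconvert ! audioresample ! autoaudiosink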
Using gst-launch is there any way to select a GStreamer Element based on the capabilities determined by a source element? For example, if I have a network source element which could download an MP3, M4A, or WAV file, can I select which element to pipe the data to based on the file type received? The file type is not actually known until after the network source element is loaded. See the linked image below for the pipeline I have in mind. Any ideas?
Select GStreamer Element Based on Source Capabilities
HTTP Pipelining: Wikipedia and Mozilla have good pipelining explanations, and their diagrams basically say it all. Normally (without pipelining), the client sends a request to the server and waits for a response before sending another request. With pipelining, however, the client sends multiple requests without waiting for the server's response. So, what does my server have to do to implement pipelining? Actually, not much. All a server has to do to support pipelining is ensure "that network buffers are not discarded between requests" (Wikipedia). All HTTP/1.1 servers support pipelining. The client is responsible for the bulk of the error handling (resending packets, etc.) and the other headaches that come with implementing pipelining.
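For intuition, here is a minimal client-side sketch in Python (the host and paths are placeholders) that sends two requests on one connection without waiting in between, then reads both responses back in order:

import socket

HOST = "example.com"  # placeholder host
req = (
    "GET /page1 HTTP/1.1\r\nHost: {h}\r\n\r\n"
    "GET /page2 HTTP/1.1\r\nHost: {h}\r\nConnection: close\r\n\r\n"
).format(h=HOST)

with socket.create_connection((HOST, 80)) as s:
    s.sendall(req.encode("ascii"))      # both requests leave before any response arrives
    response = b""
    while chunk := s.recv(4096):        # responses come back in the same order as the requests
        response += chunk

print(response.decode("latin-1")[:500])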
Can someone explain to me how pipelined requests are treated on a server in Python (or any other scripting language)? Suppose I have web services written in Python, callable by an iOS client. The client pipelines the requests and sends them to the server. How can I receive and handle these requests on the server and send the appropriate responses?
how are pipelined requests handled on the server side in python?
$thirdParam cannot be an array with your implementation. When you write $thirdParam = @() you do declare an empty array, but then you re-assign it to a string ($thirdParam = 'test3') and then to another string ($thirdParam = $thirdParam + 'test4'). I am still not clear about your original intent, but here's how you would pass test3 as the third argument and test4 as the fourth:
$fourthParam = 'test4'
[void](& '..\SomeApp.exe' "$firstParam" "$secondParam" "$thirdParam" "$fourthParam")
If you only have 2 fixed parameters and can have N parameters, switch to Invoke-Expression instead of the call operator:
[void](Invoke-Expression "..\SomeApp.exe $firstParam $secondParam $thirdParam")
and make sure your parameters are correctly quoted. In this case, if $thirdParam contains spaces, it will determine parameter #4 and onwards.
Wondering how I could pass an array as command line argument to an exe file in powershell? Here is what I am currently working onI am calling an exe file from a powershell function in the following format$firstParam = "test1" $secondParam = "test2" $thirdParam = @() $thirdParam = 'test3' $thirdParam = $thirdParam + 'test4' [void](& '..\SomeApp.exe' "$firstParam" "$secondParam" "$thirdParam"Here is what I am seeing as the input arguments in the Application.exeThe third input parameter that was passed from the powershell was an array but it got concatenated (space separated) when passed to the exe file.Is it possible to pass "test3" as the third argument and "test4" as the fourth argument?
Powershell - Pass an array as argument to an executable called from command line
I think you are trying to run before you can walk. That said, let's take a look at your first example above:function my-function { process { $_ | select-string "hello" } } "hello" | my-functionNow, you may be saying to yourself: "Hey, that's not what I asked!" and to this, I offer the following advice on Invoke-Expression (iex): Executing arbitrary text like this can be dangerous, and is rarely required. It generally indicates a lack of understanding of building interoperable functions and modules. Even throwaway scripts rarely require IEX.So, I'm not just going to leave it there but I can't reasonably answer your question without a higher level point of view on what you're trying to do. It sounds like you're trying to generalize some process for executing script, or scripts. So, my advice would be to study the various native mechanisms for doing this in PowerShell, namely the dot source (.) and call (&) operators.And finally, grab a coffee and sit back to read through Keith Hill's most excellent "Effective Windows PowerShell" free ebook:http://rkeithhill.wordpress.com/2009/03/08/effective-windows-powershell-the-free-ebook/The first few chapters will help you tohelp yourselfwhen learning the right way to do things. It covers the primary tools of PowerShell: get-command, get-help, get-member and understanding the difference betweencommandandexpressionmodes.
I can't get my function to work using iex (invoke-expression) with variables. What I want:function My-Function { # code that constructs $command #$command is an arbitrary string that may contain pipes, #variables, command line paramaters, switchs, quoted strings, THE LOT! Execute-My-Command-Exactly-As-Though-I-Had-Typed-It-Here $command }For examplefunction My-Function { $command = "`$input | select-string hello" iex $command }Does this (after . including my file which contains the above function)PS C:\> echo "hello" | My-Function PS C:\>But obviously it should do this:PS C:\> echo "hello" | My-Function hello PS C:\>Moreover on the actual command line, variables DO seem to be picked up by iex:PS C:\> $hello = "hello" PS C:\> PS C:\> iex "`$hello | select-string hello" hello PS C:\>The only workaround I can think of is to actually write out a function to a file, then . include it, then call it. But this is awful! The whole point of a good shell-scripting language is it should be able to do this meta stuff easy peasy.SOLUTION:PowerShell can't do it, though latkin provides a helpful tip.
How do you execute an arbitrary string inside a powershell script/function as though the string had actually been typed into the script/function?
Where you defined your pipeline is fine. args.Context.Request should be available at this step in the request processing. The most likely cause is that this processor is being invoked under certain circumstances where the Context is not available. A simple check for the following should handle those cases:
if (args.Context != null)
{
    //....
}
The only other explanation I can think of is that GetState() or RedirectUserByState() is calling HttpContext.Current, which is not available at this point (hence why Sitecore passes the context as an argument). Also, a load balancer would not explain the exceptions, but you may have better luck checking the following server variables if the IP ends up always being the same:
args.Request.ServerVariables["HTTP_X_FORWARDED_FOR"]
args.Request.ServerVariables["REMOTE_ADDR"]
I am trying to do this with an httpRequestBegin pipeline processor, but I don't seem to be able to access the user's IP address from the given HttpRequestArgs parameter.When I implement a class that has this methodpublic void Process(HttpRequestArgs args) { string ipAddress = args.Context.Request.UserHostAddress; // Not working string state = GetState(ipAddress); // already implemented elsewhere RedirectUserByState(state); // already implemented elsewhere }I thought that this might hold the user's IP addressargs.Context.Request.UserHostAddressbut it causes this error instead (stack trace says it originates from the Process method):System.Web.HttpException: Request is not available in this contextAny ideas? Thanks!Edit, this is in Sitecore 6.1 and in the web.config at<pipelines> <!--...--> <httpRequestBegin> <!--...--> <processor type="Sitecore.Pipelines.HttpRequest.ItemResolver, Sitecore.Kernel"/> <processor type="MySolution.Redirector, MySolution"/> <processor type="Sitecore.Pipelines.HttpRequest.LayoutResolver, Sitecore.Kernel"/> <!--...--> </httpRequestBegin> <!--...--> </pipelines>This server might be behind a load balancer.Edit 2:It looks like Iwastrying to access both of these inside the Process() function and that is what was causing the error:Sitecore.Context.Request Sitecore.Context.Response
Using a Sitecore CMS pipeline processor, how do I redirect a user based on their IP address?
Double-click the green arrow. You will get a constraint dialog; choose the "Logical OR" option there. This will make your green lines become dotted. Now you can connect several arrows to the same task. The way you had it, the task worked like an AND, meaning that it would only execute if BOTH constraints were successful.
I want to execute the same task in two different conditions. They are 1. If the job is successful and RowCount is 0. 2. If the job is successful and it has executed other tasks.I tried to use the same SQL Execute Task but regardless of the outcome, it does not execute that task. Please see the screenshot below.But I replicated that job and connected directly to the other job. At that time, it works. Does it mean that we can't have different route to execute the same task?Thanks.
Executing the same SSIS Task via different Route
I suggest that you import the Python module with your button before calling the function. Assuming your script is in maya/scripts/tep.py, your button would do the following:
import tep
tep.psource()
If you wanted to modify the script and keep running the fresh version every time you hit the button, do this:
import tep
reload(tep)
tep.psource()
And if you want your module to load on Maya startup, create a file called userSetup.py in your maya/scripts directory and have it do this:
import tep
Then, your button can simply do:
tep.psource()
Or...
reload(tep)
tep.psource()
I am trying to figure out how to use Python in Maya. I wanted to create a shelf in Maya and when I click that shelf, it will execute a file containing python code.First thing, I figured out that we can't simplysourcepython script. I followedthistutorial, so now I have a functionpsource(). In my shelf, I can just callpsource("myPythonScript")My problem is I have to somehow registerpsource()when Maya first loaded.Any idea how to do this?
How to automatically execute python script when Maya first loaded
The server-side request queue is provided by the application container, not by the servlet itself. In the case of Tomcat, the component responsible for that is called the Connector, which can be configured (in server.xml) in terms of the number of threads that serve incoming requests, the timeout that a request can stay unprocessed in the queue, the size of the queue, etc. Have a look at the Tomcat Connector documentation; I believe the most important attributes would be 'acceptCount', 'maxThreads', 'connectionTimeout', and 'maxKeepAliveRequests' (if you require HTTP keep-alive).
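As a hedged illustration (the numbers are placeholders, not tuning advice), the relevant fragment of Tomcat's server.xml might look like this:

<!-- HTTP connector tuned for keep-alive / pipelined clients -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="50"
           acceptCount="100"
           connectionTimeout="20000"
           maxKeepAliveRequests="100" />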
Can anyone suggest a simple setup of a servlet in Java that supports pipelining? (It is for unit testing, so simplicity is better than scalability.)
How can I test client pipelining in java?
You can use the HttpContext (the one exposed by the static Current property). It has an Items property which is meant to be used specifically to shuttle data between modules and handlers, as indicated by the documentation: "Gets a key/value collection that can be used to organize and share data between an IHttpModule interface and an IHttpHandler interface during an HTTP request." The documentation is here:
http://msdn.microsoft.com/en-us/library/system.web.httpcontext.items.aspx
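A minimal sketch (the key name and stored value are made up) of a module stashing something in Items and a handler reading it later in the same request:

// In the IHttpModule, wired to BeginRequest:
void OnBeginRequest(object sender, EventArgs e)
{
    var app = (HttpApplication)sender;
    app.Context.Items["RequestStartUtc"] = DateTime.UtcNow;   // hypothetical key
}

// In the IHttpHandler (or a later module), for the same request:
public void ProcessRequest(HttpContext context)
{
    var started = (DateTime)context.Items["RequestStartUtc"]; // same Items collection
    // ... use it ...
}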
From an HTTP module, is there a way to persist something in the Request so it can be accessed by a later module or the eventual handler? Can you "stick" something on the Request as it passes through that will still be there later in the pipeline?
How do you persist data from one HTTP module to another and to the eventual handler?
Assuming that the relevant parameter in the top-level job is named ACTION while the parameter in the downstream job is named WHICH_ACTION, you may want to try this:
echo "You chose: ${params.ACTION}"
build job: '<downstream job name>', parameters: [
    string(name: "WHICH_ACTION", value: params.ACTION),
]
I have an upstream job where user should be able to choose which stage to execute from a downstream job, but I'm struggling with correct syntax to pass these values on at a build job stage. I haven't found a solution to pass choise parameters to build.I've created choicesparameters { choice( name: 'ACTION', choices: ["Stop", "Deploy"], description: 'Choose whether to stop or deploy services' ) }And then based on those values the downstream job would execute the concurrent stagestage('Stop') { when { expression { params.ACTION == 'STOP' } } steps {`Now I would like to pass these values at the build phase, but here I'm lostecho "You chose: ${params.ACTION}" build job: 'stage/{app}', parameters: "WHICH_ACTION=${params.ACTION}"),The error I get isjava.lang.ClassCastException: class org.jenkinsci.plugins.workflow.support.steps.build.BuildTriggerStep.setParameters() expects java.util.List<hudson.model.ParameterValue> but received class java.lang.String at
Pass choice parameters to downstream job in jenkins
.NET MAUI does not support Linux, therefore you can neither build for it nor build from it. See here.
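If you want to keep a hosted pipeline, one hedged option (a sketch, not a verified configuration) is to move the build to a Windows-hosted image, where the MAUI workload can be installed:

pool:
  vmImage: windows-latest   # assumption: the MAUI workload is not installable on the ubuntu image

steps:
- task: UseDotNet@2
  displayName: 'Use dotnet 8'
  inputs:
    version: '8.0.x'
- script: dotnet workload install maui
  displayName: 'Install MAUI workload'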
I have created a.NET8 MAUI Class Libraryto use in MAUI projects. The repo is inAzure DevOpsand I was trying to build and publish the package via NuGet.For that, I wrote a YAML filetrigger: - main pool: vmImage: ubuntu-latest steps: - task: UseDotNet@2 displayName: 'Use dotnet 8' inputs: version: '8.0.x' - task: CmdLine@2 inputs: script: 'dotnet workload install maui' - task: DotNetCoreCLI@2 displayName: Restore packages inputs: command: 'restore' feedsToUse: 'select' vstsFeed: 'c800d0d7-e2af-4567-997f-de7cf7888e6c' - task: DotNetCoreCLI@2 displayName: Build project inputs: command: 'build' projects: '**/PSC.Maui.Components.BottomSheet.csproj' arguments: '--configuration $(buildConfiguration)'When the pipeline runs, I get this errorGenerating script. Script contents: dotnet workload install maui ========================== Starting Command Output =========================== /usr/bin/bash --noprofile --norc /home/vsts/work/_temp/42901c0d-f407-4f75-912b-f93132efa865.sh Workload ID maui isn't supported on this platform. ##[error]Bash exited with code '1'. Finishing: CmdLineThen, I tried to create the NuGet package locally, but it was not recognized by the NuGet website when I uploaded it.How can I change the pipeline?
Create a NuGet package for .NET8 MAUI with Azure DevOps
First, you must ensure that npm install works on your local machine. Often the problem comes from the fact that you use ^ to specify the version of a dependency (see more details about semantic versioning here). In that case you do not have much control over the exact version of a dependency and of its dependencies, and sometimes the project breaks. I would recommend removing ^ to control exact versions manually, and running npm outdated from time to time to see if you want to proceed with updates. Silent automatic updates may give you surprises, such as "it worked yesterday but stopped working today for no reason". Then, you should ensure that your deployment uses the same versions of all the dependencies. This is why the file package-lock.json exists (see here for more explanation). I recommend committing this file to your repository and running npm ci to install the dependencies (see here). A similar approach applies to the yarn package manager and others.
When I deployed my changes to the master branch, the pipeline failed. If I'm correct, the error indicates a conflict in the versions of @angular/common required by different dependencies in the project. The root project requires @angular/common@"^16.0.0", while[email protected]requires a peer dependency of @angular/common@"^17.0.4".Here is the error in the pipeline that failed during npm install:npm ERR! code ERESOLVE npm ERR! ERESOLVE unable to resolve dependency tree npm ERR! npm ERR! While resolving:[email protected]. npm ERR! Found: @angular/[email protected]npm ERR! node_modules/@angular/common npm ERR! @angular/common@"^16.0.0" from the root project npm ERR! npm ERR! Could not resolve dependency: npm ERR! peer @angular/common@"^17.0.4" from[email protected]npm ERR! node_modules/ngrx-store-localstorage npm ERR! ngrx-store-localstorage@"^16.0.0" from the root project npm ERR! npm ERR! Fix the upstream dependency conflict, or retry npm ERR! this command with --force, or --legacy-peer-deps npm ERR! to accept an incorrect (and potentially broken) dependency resolution. npm ERR! npm ERR! See /root/.npm/eresolve-report.txt for a full report.
How to know which version of angular/common to use to satisfy the dependencies?
While it is straightforward to update a FCT table from a given large table (you have to select the fact columns from the big table and insert them into the FCT table), updating a DIM table from a given big table poses one major problem: slowly changing dimensions. In the big table it is theoretically possible to have different car names (or any car detail) for the same car ID. In such a case your DIM table would also have to track the history of changes to a particular car ID, so that you can correctly join the FCT and DIM tables to get back your original table. To overcome this you will have to add the date_transaction field from your big table to the DIM table, effectively creating a composite key (id, date_transaction) on the DIM table. Note: filtering out new DIM rows that have changed can be an expensive process. However, not doing so will mean your DIM table has as many rows as your FCT table (which rather defeats the purpose of decomposing).
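As a hedged illustration of the composite-key idea (the table and column names below are assumptions, since the question's schema is only described in general terms), the DIM load could insert only (id, date_transaction) combinations it has not seen before:

-- insert only car rows whose (id, date_transaction) is not yet in the DIM
INSERT INTO dim_car (car_id, date_transaction, car_name, car_details)
SELECT b.car_id, b.date_transaction, b.car_name, b.car_details
FROM   big_table b
WHERE  NOT EXISTS (
         SELECT 1
         FROM   dim_car d
         WHERE  d.car_id = b.car_id
           AND  d.date_transaction = b.date_transaction
       );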
I have very large table that contains repeated dimensions as strings and it has more than 40 million rows. This table is populated with millions of records every day.The facts and dimensions table would reflect the information inside the very large table. The facts and dimensions are updated with the new information inserted in the big table and the operation should be fast (considering millions of new rows are inserted in the big table every day)Here's an example:Here's is the pipeline I assume is the most efficient and fastIs there any way to do this pipeline efficently in SQL Server? Is there another method that would be faster?
Efficiently sync/create a dimensions and facts tables from a very large table
If multiple pipelines are triggered simultaneously, it is not possible to order the runs of their stages in the way you require. For your case, as a workaround, you can consider a method like the one below. Within each pipeline, you can use the 'dependsOn' key to order the runs of stages:
# Stage 2 will run after stage 1 completes.
# By default, stage 1 should complete successfully.
stages:
- stage: 1
  . . .
- stage: 2
  dependsOn: 1
  . . .
To order the runs of the pipelines, you can use a pipeline resource trigger (pipeline completion triggers). In the pipeline for App2, set the configuration like this:
trigger: none # Disable CI trigger.
resources:
  pipelines:
  - pipeline: App1
    source: App1 # Actual name of the pipeline for App1.
    trigger: true
Similarly, in the pipeline for App3, set the configuration like this:
trigger: none # Disable CI trigger.
resources:
  pipelines:
  - pipeline: App2
    source: App2 # Actual name of the pipeline for App2.
    trigger: true
With the above configurations, the stages will run in the order you expect.
I have microserviced monorepo in Azure Devops, with pipeline for each of the nested services.root /app1/app1-pipeline.yml /app2/app2-pipeline.yml /app3/app3-pipeline.ymlEach of the pipelines has 2 stages:For building and pushing docker imagesCheckout to another repository, running py script and git pushing output (with metadata).When for example 3 of the pipelines are triggered by the same commit , I can see that the order of execution of stages is the following:App1 - stage 1App2 - stage 1App3 - stage 1App1 - stage 2App2 - stage 2App3 - stage 2I am looking for a way to "force" the order of execution to the following:App1 - stage 1App1 - stage 2App2 - stage 1App2 - stage 2App3 - stage 1App3 - stage 2In other words I want the runner to sequentially execute the pipelines one by one, instead of mixing them.I couldn't find a solution for it.Is this possible or maybe it is not recommended(why)?Thank you in advance!
Sequencing the order of stages in multiple pipelines in Azure DevOps
Yes, that is possible. For writing to BigQuery without using a window, you will need to use a write method equal to STREAMING_INSERTS or STORAGE_WRITE_API (possibly with use_at_least_once set to True for even lower latency, at the risk of duplicates). For the rest of the pipeline, you just need two branches. Something like this:
msgs = p | "Read from P/S" >> beam.io.ReadFromPubSub(...)
dictionaries = msgs | "Transform to dict with some schema" >> ...
dictionaries | beam.io.gcp.bigquery.WriteToBigQuery(..., method=STORAGE_WRITE_API, use_at_least_once=True, ...)
dictionaries | beam.WindowInto(FixedWindows(size=300)) | ...  # (aggregate, write to another BigQuery table, etc.)
I am trying to think of how to architect some data pipeline needs, and I simply want to know if the following is possible:Can I create an Apache Beam pipeline that can stream data fully real time while also aggregating into windows? Specifically, I would like to:Read data in from a Pub/Sub subscription.Send that data onward immediately to a BigQuery table, as is.Create an Apache Beam window of, say, 5 minutes.Aggregate that same data read in from the Pub/Sub message (with some other light transformations).Write the aggregated/transformed data into a different BigQuery table.As an always-on streaming pipeline process.I know that I can do this in other ways. For instance, I can create 2 dataflow jobs / pipelines that listen to the same subscription. (Or would it be better to have 2 separate subscriptions listening to the same topic?) I can also create a subscription for the dataflow job, then another subscription (to the same topic) that just pushes to BigQuery immediately.But if I could have one set of code -- and therefore one CI/CD job to monitor -- to accomplish both, it simplifies what we need to maintain, and it would be much preferred.Is this possible?
Is it possible to create a Beam pipeline with multiple windowing needs
When your pipeline completes, you call stop and then delete the pipeline. When you delete the pipeline, it will destroy all of the underlying resources that were created to execute it.
execution.stop()
pipeline.delete()
https://docs.aws.amazon.com/sagemaker/latest/dg/run-pipeline.html#run-pipeline-delete
I saw that for each step in sagemaker pipeline we define instances on which the step will be run. So, after the completion of each step is the instance destroyed, or does it keep on running?
Are instances used in sagemaker pipeline destroyed automatically after the pipeline completion?
Pandas and PySpark are very different. Although PySpark provides interoperability with Pandas, a pandas DataFrame is very different from a PySpark/Spark DataFrame. Understand the differences between Pandas and PySpark before you write any code. There are two parts to your problem. The first is understanding how to read/write csv and parquet files that reside on your laptop's hard disk. The second is how to use ADLS instead of the local hard disk. For the first part: see the examples that come with the PySpark docs, e.g. how to read from csv into a DataFrame and how to write a DataFrame to parquet. Also see the PySpark SQL API docs (the PySpark SQL API is a Python API, not SQL), e.g. read/write csv and read/write parquet. For the second part: when working with cloud storage as the underlying storage (ADLS, S3, ...), you need to prefix all your paths with the appropriate scheme (e.g. s3a, abfss, ...), install the appropriate hadoop extension/library that supports that scheme in your PySpark env (PySpark will use it to read/write to cloud storage), and set the appropriate config params in the Spark config or whatever way you use to authenticate. There are many guides available depending on your use case; here is one, and here is another.
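For the first part, a minimal PySpark sketch (it reuses the redacted abfss paths from the question; authentication/config on the Spark session is assumed to already be in place):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# read the CSV into a Spark DataFrame (schema inferred here for simplicity)
df = spark.read.csv(
    "abfss://[email protected]/MySQL_Project-Table_Courses.csv",
    header=True, inferSchema=True)

# write it back out as parquet files under the target folder
df.write.mode("overwrite").parquet("abfss://[email protected]/Parquet")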
I'm just stepping into the data world and have been asked to create a custom project where I need to convert a CSV to a parquet using a Notebook (PySpark). I've put this together so far, which seems to run without errors but there's nothing in my Parquet folder in ADLS.def convert_csv_to_parquet(input_file_path, output_file_path): # Read CSV file into a Pandas DataFrame df = pd.read_csv(input_file_path) # Convert Pandas DataFrame to PyArrow Table table = pa.Table.from_pandas(df) # Write PyArrow Table to Parquet file pq.write_table(table, output_file_path) # Open the Parquet file table = pq.read_table(output_file_path) # Convert the table to a Pandas DataFrame df = table.to_pandas() # Print the DataFrame print(df.head(100)) input_file_path = 'abfss://[email protected]/MySQL_Project-Table_Courses.csv' output_file_path = 'abfss://[email protected]/Parquet' convert_csv_to_parquet(input_file_path, output_file_path)
Convert a CSV to a Parquet in a Notebook
The Data Flow activity won't give the input query in its input or output. As a workaround, try the method below. Take a branch off your data flow and use a derived column transformation; in it, create a column and give it the same query expression. You can add this by taking a branch from your current source, or you can create another sample source and continue the same approach. Here, I have taken a SQL query as the sample. It will build the query and store the query string in a column named query. Use a cache sink to surface this query at the pipeline level; the write order for this sink should be 1. In the cache sink mapping, filter the query column using rule-based mapping. In the Data Flow activity, set logging to None and select "first row only". Now you can find the input query in the Data Flow output's cache sink. Use the expression below to access it in the pipeline:
@activity('Data flow1').output.runStatus.output.sinkCache.value[0].query
I have a pipeline on ADF with aFor Loopactivity and inside I have aData Flowrunning a Kusto query. In this loop I am using a variable with 10 string in an array, so that my Data Flow will run based on those values as a parameter.What is happening is, some of the runs in the loop is failing, and some others is working fine.Example as image below:How can I see the query that is running inside the Data Flow with all parameters and expression that I am using? I mean, the final query as an output. I would like to run that same query on my Kusto to see if something is wrong or not.Based on @Rakesh Govindula answer, this is my data flow and this what I have tried from his answer:
How to capture output query from ADF pipeline from For Each activity
If you need a highly secure solution, then you should use a dedicated secrets management service such as Google Cloud KMS or HashiCorp Vault. If you are on a budget or do not need the highest level of security, then you can store your GCP keys as encrypted variables in the Terraform Cloud workspace.
I possess a key for accessing my Google Cloud Platform (GCP) project, along with SSH keys within the project. I need to incorporate them into my CI/CD pipeline, but it's not secure to simply push them directly to GitLab. I don't know if this is possible with Terraform Cloud?
How to secure key and authentication files on GitLab
Each pipe is essentially a class that handles a piece of logic. The class should define a handle method: namespace App\Pipes; class SomeServiceIntegration { public function handle($user, \Closure $next) { // your service logic goes here return $next($user); } } You can use the Pipeline class to send an object through the defined pipes: use Illuminate\Pipeline\Pipeline; $integrations = $user->integrations; if ($integrations->count()) { // here you need to resolve the class (pipe) for each integration $pipes = $integrations->map(function ($integration) { return $integration->serviceClass; // assuming you store the full class path in the 'serviceClass' attribute })->toArray(); return app(Pipeline::class) ->send($user) ->through($pipes) ->thenReturn(); } If each service has its own settings, you may need to pass them in: $pipes = $integrations->map(function ($integration) { return app($integration->serviceClass, ['settings' => $integration->settings]); })->toArray(); And adjust the SomeServiceIntegration class to accept settings: namespace App\Pipes; class SomeServiceIntegration { protected $settings; public function __construct($settings) { $this->settings = $settings; } public function handle($user, \Closure $next) { // you can use $this->settings here return $next($user); } }
I have a User model with several external services attached to it (a Services model, a many-to-many relationship). I don't know in advance how many external integrations there will be or what settings they have (each service is implemented as a separate class, and the integration settings are described in the properties of the Services model). I want to use the Pipeline pattern, but I can't find any good Laravel code examples or documentation. It should look something like $integrations = $user->integrations; if ($integrations->count()) { return app(Pipeline::class) -> send($user) -> through($integrations) -> thenReturn(); } But I cannot understand how it works inside... how do I implement the Pipe classes?
how to use Laravel Pipeline?
By default Azure DevOps doesn't have this option, but you can achieve it by disabling the default get-source step with - checkout: none Then add a cmd or PowerShell task and use git sparse checkout. https://git-scm.com/docs/git-sparse-checkout Now get the parts of your repository using the API. https://learn.microsoft.com/en-us/rest/api/azure/devops/git/items/get-items-batch?view=azure-devops-rest-6.1&tabs=HTTP
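A rough YAML sketch of the sparse-checkout approach (the clone URL, credentials and folder path are placeholders you would replace with your own):

steps:
- checkout: none   # skip the default 'checkout ... to s' step
- script: |
    git init
    git remote add origin <your repo clone URL, including credentials such as a PAT>
    git sparse-checkout init --cone
    git sparse-checkout set path/to/needed/folder
    git pull origin main
  displayName: 'Sparse checkout of only the needed folder'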
I've had this issue with both Git and TFVC on DevOps. I've established a pipeline that copies the git (or tfvc) directory to what is our test server, which is a local/physical windows server with iis running a coldfusion instance. The server is in the agent pool and the files are updating with WindowsFileCopy. The problem is that, even if I only change one file, it does this step "checkout test@main to s" which copies the whole directory to a folder in the C: drive (C:\agent_work\1\s), then from there it copies the whole directory to the target one ( c:\inetpub\wwwroot\intranet ). It will also sometimes create a new folder (_work\2, _work\3, and so on), so it adds to the HD. The whole process, even if I've only updated a single file, takes 3 minutes or more. So, can I somehow change my yml to only include an altered file, or at least to skip the step of copying it to _work?trigger: - main pool: name: 'Default' # Replace with the name of your agent pool demands: - agent.os -equals Windows_NT # Optional: Ensure it runs on Windows-based agents steps: - task: WindowsMachineFileCopy@2 inputs: SourcePath: '$(Build.Repository.LocalPath)' MachineNames: 'PR1-TEST' AdminUserName: 'ntdomain\admin_user' AdminPassword: '$(admin_password)' TargetPath: 'c:\inetpub\wwwroot\intranet'
Azure DevOps - Git pipeline with windowsfilecopy is copying the full directory
By default, Taipy runs in development mode, which deletes data and scenarios upon each execution.To retain scenarios, you can use either the 'experiment' or 'production' modes (see thedocumentation).To run in 'experiment' mode and create/run version V1:python main.py --experiment V1This ensures that your scenarios and data remain intact.To run your application with VS Code:Set up alaunch.jsonfile (you can create it under the debug tab in VS Code).Add the following to the file (.vscode/launch.json):"args":["--experiment", "V1"],
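For reference, a minimal launch.json sketch (the program path is a placeholder for your own entry script, and the "python" debug type assumes the standard VS Code Python extension):

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Taipy: experiment V1",
      "type": "python",
      "request": "launch",
      "program": "${workspaceFolder}/main.py",
      "args": ["--experiment", "V1"]
    }
  ]
}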
I created a Taipy application to create scenarios with predictions and metrics, but when I rerun my code after stopping it, I lose all my scenarios and data. Why does this happen? How can I keep my scenarios and data?
Taipy Scenarios not found after a rerun
When splitting the declaration of infrastructure over different configurations that are independently applyable, it's important to choose the division points carefully so that the different configurations reallyareindependently applyable. Terraform only works with one configuration at a time, so it cannot automatically resolve changes that would require simultaneous changes in two different configurations.This is not inherently a problem withterraform_remote_state-- the same problem could arise with any use ofdatablocks to depend on objects managed elsewhere. The root problem here seems to be that Y is declared to manage something that should perhaps actually be managed by X, because it needs to immediately change in response to other changes in X.Exactly how to resolve that will depend on exactly which infrastructure objects you're using in practice. Reorganizing which objects are managed by which configuration might solve the problem, but it also might introduce another similar problem. Sometimes successfully splitting infrastructure into two independent units requires a different design approach where e.g. there's some third unit that both X and Y depend on, which provides some way for X and Y to successfully collaborate.If you ask a new question where you name exactly the resource types you are using and share your configuration then you may get a more actionable answer.
I have two Terraform modules -XandY- whereYdepends onXthroughterraform_remote_statedata source.Xis deployed beforeYin the pipeline, which means that the pipeline fails wheneverXhas changes that cause resources to be re-created and/or deleted. In this case, the resource inXcannot be deleted as it's used by some resource inY.How is this problem normally solved? Is it an anti-pattern to useterraform_remote_statein the first place?
Terraform cannot re-create/delete resources in a module, because they're used by dependent modules through terraform_remote_state?
You can add a new schedule with a variable and adjust .gitlab-ci.yml accordingly. For example: job: script: echo "Hello, Rules!" rules: - if: $CI_PIPELINE_SOURCE == "merge_request_event" when: manual allow_failure: true - if: $CI_PIPELINE_SOURCE == "schedule" && $SCHEDULE_TIMING == "hourly" Then add a new schedule that sets the SCHEDULE_TIMING variable (e.g. to "hourly").
I have several jobs that are set to be triggered only by scheduled pipelines. One job needs to run every hour and another is set to run every day. How do I set a job to run on a schedule, and only on specific schedule definitions? I know GitLab CI has an option to set rules for jobs to run on schedule, but I could not find an option to target a specific schedule, given that I have many scheduled tasks. For example: job: script: echo "Hello, Rules!" rules: - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master" && $CI_PIPELINE_SOURCE == "merge_request_event" when: manual allow_failure: true - if: $CI_PIPELINE_SOURCE == "schedule" The rule - if: $CI_PIPELINE_SOURCE == "schedule" is very generic. I don't want some jobs to run every hour; I want a job to pick up only the schedule that runs every day, for example.
GitLab CI schedule task for specific jobs
I cloned this repository and tried running the test files in a specific order. My Azure cloned repository: I tried running CreateAccountTest.spec.ts before ContactUsTest.spec.ts, and it ran successfully with the code below. My YAML code: trigger: - main pool: vmImage: ubuntu-latest steps: - task: NodeTool@0 inputs: versionSpec: '18' displayName: 'Install Node.js' - script: npm ci displayName: 'npm ci' - script: npx playwright install --with-deps displayName: 'Install Playwright browsers' - script: npx playwright test src/tests/files/CreateAccountTest.spec.ts displayName: 'Run CreateAccountTest.spec.ts' - script: npx playwright show-report test-results/results displayName: 'Show test report' - script: npx playwright test src/tests/files/ContactUsTest.spec.ts displayName: 'Run ContactUsTest.spec.ts' - script: npx playwright show-report test-results/results displayName: 'Show test report' You can refer to this link to run the npx playwright command in a different order by specifying a filter. Run the npx command with file names my-spec and my-spec-2: npx playwright test my-spec my-spec-2 Run tests of a specific project: npx playwright test --project=chromium
I got stuck when i'm creating a azure pipeline for my playwright test. I want the tests to run in a specific order.This is how my YAML file looks right now, but now it just run all testes in not the right order.trigger: - main pool: vmImage: ubuntu-latest steps: - task: NodeTool@0 inputs: versionSpec: '18' displayName: 'Install Node.js' - script: npm ci displayName: 'npm ci' - script: npx playwright install --with-deps displayName: 'Install Playwright browsers' - script: npx playwright test displayName: 'Run Playwright tests' - task: PublishTestResults@2 displayName: 'Publish test results' inputs: searchFolder: 'test-results' testResultsFormat: 'JUnit' testResultsFiles: 'e2e-junit-results.xml' mergeTestResults: true failTaskOnFailedTests: true testRunTitle: 'My End-To-End Tests' condition: succeededOrFailed() - task: PublishPipelineArtifact@1 inputs: targetPath: playwright-report artifact: playwright-report publishLocation: 'pipeline' condition: succeededOrFailed()Here is my structure. For example I want to run createTraveler.spec.ts first, than createSingleTrip.spec.tsStructure FileI have tried writing- script: | npx jest ./Digor.Web/playwright-tests/createTraveler.spec.ts displayName: 'Run createTraveler.spec.ts' - script: | npx jest ./Digor.Web/playwright-tests/createSingleTrip.spec.ts displayName: 'Run createSingleTrip.spec.ts'but does not seem to work.
How to run tests in specific order - Playwright YAML file
Please try this:parameters: - name: dev_vault displayName: "Dev Vault Id (Same as Database Id)" type: string default: 001 ... variables: - template: vars/global.yml - name: devDatabaseName ${{ if eq(parameters.dev_vault, '001') }}: value: "DEV_PSP" ${{ else }}: value: "DEV${{parameters.dev_vault}}_PSP" ...
I'm trying to do something that seems pretty straightforward to me, I keep getting a not very useful mapping error when I try to run it (compile time error): "A mapping was not expected"The error points to the line "${{ if eq(parameters.dev_vault, '001') }}:" belowHere is my yaml pipeline:parameters: - name: dev_vault displayName: "Dev Vault Id (Same as Database Id)" type: string default: 001 ... variables: - template: vars/global.yml - name: devDatabaseName value: ${{ if eq(parameters.dev_vault, '001') }}: "DEV_PSP" ${{ else }}: "DEV${{parameters.dev_vault}}_PSP" ...Am I trying to do something that is not supported?
Azure Devops Yaml release pipeline: pipeline level conditional expression
To enable the Azure subscription in the tasks, navigate to the Stage section and click on "Unlink All" in the Parameters option as shown below:Next, proceed to the Azure App Service Deploy task and verify that the Azure subscription is now enabled.
I'm trying to create a release pipeline in my Azure DevOps account to deploy my site to a dev/test/prod environment. I Already set up the service connection and is working ok. When I am creating the pipeline and use for example the "Deploy Azure App Service" template the task option for "Azure Subscription" is disabled and I can't select my already configured connection.I know everything is ok because I already tested the connection using a build yaml pipeline with the task "AzureRMWebAppDeployment@4" and the deploy process using the connection was successful.Is there a way the option become available so I can use it in a release pipeline?
Azure DevOps Release pipeline Azure subscription field is disabled
One possible workaround is by using the v1 or v2 ofkfp==2.0.0b16. You can upgrade your pip or install the said version of kfp.pip install --upgrade pip pip install kfp==2.0.0b16
Error importing aiplatformTried following the vertex ai documentation and while running:from google_cloud_pipeline_components import aiplatform as gcc_aipI get an error: Import error: Cannot import name '_dynamic' from 'kfp.components' (/opt/conda/lib/python3.10/site_packages/kfp/components/init.py)
I am trying to create a simple vertex ai pipeline to read a gcs folder containing .parquet files using kubeflow
You can use the Pipeline Create Run REST API to call the ADF pipeline from a SQL script. POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/pipelines/{pipelineName}/createRun?api-version=2018-06-01 To call the REST API from SQL, use the OLE automation procedures sp_OACreate and sp_OAMethod, which allow you to call the API from the SQL Server instance. Generate the authentication token for the above API using a service principal, use that token when calling the REST API with the POST method, and supply your details such as subscription id, resource group name, data factory name and pipeline name. You can go through this blog by @Chandni Lakhani to understand the OLE automation procedures.
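A rough T-SQL sketch of that pattern (it assumes 'Ole Automation Procedures' is enabled on the instance and that @token already holds a bearer token obtained for the service principal; the URL segments are placeholders):

DECLARE @obj INT;
DECLARE @token NVARCHAR(MAX) = N'Bearer <token obtained for the service principal>';
DECLARE @url NVARCHAR(MAX) = N'https://management.azure.com/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.DataFactory/factories/<factory>/pipelines/<pipeline>/createRun?api-version=2018-06-01';

EXEC sp_OACreate 'MSXML2.ServerXMLHTTP', @obj OUT;                        -- create the HTTP client object
EXEC sp_OAMethod @obj, 'open', NULL, 'POST', @url, 'false';               -- synchronous POST
EXEC sp_OAMethod @obj, 'setRequestHeader', NULL, 'Authorization', @token; -- bearer token header
EXEC sp_OAMethod @obj, 'setRequestHeader', NULL, 'Content-Type', 'application/json';
EXEC sp_OAMethod @obj, 'send', NULL, N'{}';                               -- empty JSON body starts the run
EXEC sp_OADestroy @obj;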
Similar to 'sp_start_job', is there a way to execute a Data Factory Pipeline from SQL Server via a stored procedure?I tried researching and everything seemed to come back to using stored procedures within Azure Data Factory. I'm familiar with that, but I want to execute it from the SQL Server environment.It might not be possible. Another option could be to use a trigger based on data within SQL Server, but I'd rather not go that route.
Is it Possible to Call an Azure Pipeline from SQL Server
If your code was as follows: command_assembly = f'{a} {b} {c}' result = subprocess.run([self.APP_ASSEMBLY, command_assembly], capture_output=True, text=True) you would be calling self.APP_ASSEMBLY with only one argument, which would be the values of a, b and c separated by spaces. This is different from what you almost certainly want, 3 arguments, which would be accomplished like this: command_assembly = [f'{a}', f'{b}', f'{c}'] result = subprocess.run([self.APP_ASSEMBLY, *command_assembly], capture_output=True, text=True) The code you provided, once corrected, would then look like this: command_assembly = [f'{self.INPUT_ASSEMBLY_SRC}{self.file}', f'{self.ASSEMBLER}', f'{self.ONLY_ASSEMBLY}', f'{self.MIN_PARAMS}', f'{self.CUT_OFF}', f'{cutoff}', '-o', 'out'] result = subprocess.run([self.APP_ASSEMBLY, *command_assembly], capture_output=True, text=True) Further explanation can be found here.
what's wrong with the code below:command_assembly = f'{self.INPUT_ASSEMBLY_SRC}{self.file} ' \ f'{self.ASSEMBLER} {self.ONLY_ASSEMBLY} {self.MIN_PARAMS} ' \ f'{self.CUT_OFF} {cutoff} '\ f'-o out' result = subprocess.run([self.APP_ASSEMBLY, command_assembly], capture_output=True, text=True)`In my tests, the run method seems to only capture one parameter and not the entire string that contains multiple parameters. Remembering that running in the terminal everything works perfectly with the string above.
Call a external python script
In summary, you need to abort the build without failing. How to abort? However, you are only testing the when condition in one single stage, which is why that stage is skipped but the build continues. By design, the when expression in the Jenkins declarative DSL cannot be executed outside a stage. Documentation So you have two alternatives: Duplicate the same when condition in each stage. Easy but inefficient. Configure an upstream job that only verifies whether the condition is met (changes found) and, if so, calls the other downstream (existing) pipelines. Complex but efficient.
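A quick sketch of the first alternative, reusing the when block from the question in every stage that should only run on changes (note the not is dropped, so the stage runs when the subdirectory did change; config.Project.SubDirectory comes from your own setup):

stage ('Build') {
    when {
        allOf {
            expression { config.Project.SubDirectory != "" }
            changeset config.Project.SubDirectory + '/**'
        }
    }
    steps {
        // build steps run only when the subdirectory changed
    }
}
// repeat the same when block in the Test, Deploy, ... stages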
I have a C# repo that has multiple Projects inside of a single solution. Each Project has their ownJenkinsfilebut they could share a Pipeline with another Project. Inside of theirJenkinsfileit gives a subdirectory. I am trying to get it to where it will only continue through the Pipeline if there have been changes to a subdirectory.stage ('Check Subdirectory for Change') { when { allOf { expression { config.Project.SubDirectory != "" } not { changeset config.Project.SubDirectory + '/**' } } } steps { script { currentBuild.result = 'SUCCESS' config.ContinueBuild = false } } }This is my current setup inside the pipeline. This will do the check and then skip the stage if there weren't any changes. The problem is that if there are no changes it will skip theCheck Subdirectory for Changestage and continue the build. I need it to exit the build inside and ONLY continue if there was changes to the subdirectory.
Ending Jenkins Pipeline Early
You must use$and:db.collection.aggregate([ { $match: { $expr: { $and: [ { $gte: [{ $hour: "$DateTime" }, 8] }, { $lte: [{ $hour: "$DateTime" }, 18] } ] } } } ])
I want a query where I can get records for the last 7 days where events occurred between 3-6 pmAnd timestamp is in unix so I tried{ $expr: { $gte: [ { $hour: "$DateTime" }, 8 ], $lte: [ { $hour: "$DateTime" }, 18 ]} } Error: An object representing an expression must have exactly one field: { $gte: [ { $hour: "$DateTime" }, 8 ], $lte: [ { $hour: "$DateTime" }, 18 ] }Any help would be appreciated.
I want to retrieve the records in Mongodb for last 7 days for specific time. Like from 3-6 pm for last 7 days
I believe you are getting this error because you are passing the resource id as a string, while the resource id type is actually a GUID. Please try the following command: $assignments = Get-MgGroupAppRoleAssignment -GroupId $group.Id -Filter "ResourceId eq $($sp.Id)" | Format-List
I'm trying to get an Azure AD Role Assignment using Microsoft.Graph PowerShell module.I'm running a command below:$assignments = Get-MgGroupAppRoleAssignment -GroupId $group.Id -Filter "ResourceId eq '$($sp.Id)'" | Format-Listwhere $sp is the service principal object that was successfully found.I'm getting the next error:##[error]Invalid filter clause ##[error]PowerShell exited with code '1'.I was trying to replace the old AzureAD command Get-AzureADGroupAppRoleAssignment with the new Microsoft.Graph Get-MgGroupAppRoleAssignment.Thanks in advance!
Invalid filter clause. Microsoft.Graph
Use the query below in the pre-copy script to achieve your requirement. IF EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = '@{pipeline().parameters.tablename}') begin TRUNCATE table [@{pipeline().parameters.schema}].[@{pipeline().parameters.tablename}]; end And make sure you select the Auto create table option in the copy activity sink. Here, the pre-copy script checks whether the table exists: if it exists, it truncates the table; if it does not exist, it does nothing. The Auto create table option creates a new table only when the table does not exist in the schema; if it already exists, it does nothing. In my case the table already existed with one row, so it truncated the table and inserted the values from the source.
I'm quite new to ADF, so I would like to know how I can change the following expression to auto-create the table in my sink. This is the current expression: This expression can truncate the existing SQL schema and SQL table, but when I need to create a new schema and table the Copy Data activity fails. What change do I need to make to the current expression? **@{if(and(not(empty(pipeline().parameters.SQLTargetSchemaName)), not(empty(pipeline().parameters.SQLTargetTableName))), concat('truncate table [', pipeline().parameters.SQLTargetSchemaName, '].[', pipeline().parameters.SQLTargetTableName, ']'), '') }** This is the current config of my sink page:
Azure Data Factory Expression If else auto create and truncate table
php:7.0is pretty ancient, it was end-of-lifed more than 4 years ago. It's built on Debian 9 "Stretch" which was end-of-lifed last year. The apt repositories are no longer maintained, so yourapt updateis not updating anything and yourapt installis not installing anything. You might be able to find an old pre-built Docker image that already has the needed packages installed, but you'll need to face the inevitable march regardless.If you want to keep running PHP 7.0, you'll probably have to build your own image, starting from something recent like Ubuntu 22.04 LTS and then compiling PHP 7.0 from source. It's certainly doable but would not be my recommendation. It would likely be less effort to just update your pipeline (and your app) to use PHP 7.4. Note that this is also already past end-of-life, but you'll still be able to get running images of it for a while.Sooner or later, you're going to have to update your app to PHP 8. If you're having problems doing that,Rectorcan be very helpful.
I have bitbucket pipelines failing all of a sudden due to issues with the php zip library (yaml file hasn't been touched in years and builds were successful a few weeks ago)I've tried multiple changes with the php image (using an apache version as well as an fpm) and I've tried with configuration changes to use --with pointing to the directory for zip usage but no matter what I get this on pipeline build:checking libzip... no configure: error: zip support requires ZLIB. Use --with-zlib-dir=<DIR> to specify prefix where ZLIB include and library are located 2023-05-23T14:44:58.536435778Z stdout P checking for the location of zlib...I've put a solid day and a half into this and can't seem to figure out the issue. Has anyone had this issue with their pipeline builds?Existing bitbucket-pipelines.yml:# registry for your build environment. image: php:7.0 pipelines: default: - step: caches: - composer script: # Modify the commands below to build your repository. - apt-get update && apt-get install -y unzip zlib1g-dev git - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer - docker-php-ext-install zip - php -v - composer --version - composer install --no-scripts - vendor/bin/phpunit --version - vendor/bin/phpunit
Bitbucket pipelines keep failing on ZLIB install/discovery for php image
Yes, you can change the build description in Jenkins when the build is triggered by a GitLab webhook. To do this, add a bit of Groovy to your Jenkins pipeline that sets a custom build description (note the script block, which is required for arbitrary Groovy inside a declarative post section). pipeline { agent any stages { stage('Build') { steps { sh 'echo "Build started"' } } } post { always { script { // Set a custom build description based on the git commit message def customDescription = sh(script: 'git log -1 --pretty=%B', returnStdout: true).trim() // Use the custom description if available, otherwise fall back to the default message currentBuild.description = customDescription ?: "Started by GitLab push by ${env.GITLAB_USER_LOGIN}" } } } }
I'm trying to connect Jenkins with GitLab via a webhook. When I push a specific branch, the webhook triggers a Jenkins pipeline, and each build gets a description that prints "Started by GitLab push by XXX". Now I have a question: can I change that text myself? I want to change it to something of my own, for example the git commit message or something from the merge request. Please let me know! Sincerely. I tried searching on Stack Overflow and Google, but I didn't find a proper way to control it, so I'm leaving a note here.
how to change jenkins build description "Started by GitLab push by " statement when using gitlab
Yes I think it’s possible.I think if you change your deploy stage to check for another gitlab variable (one that you set dynamically or within a schedule) like: … TARGET_BRANCH_NAME == "master" && $CI_PIPELINE_SOURCE == "merge_request_event" && $DEPLOY == “true”Then that job should only occur when a $DEPLOY variable has been set to “true” which won’t happen when you simply merge to main, unless you set it.Then you can trigger a new pipeline run manually and add the variable and it will deploy.
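A sketch of how that rule could look, based on the deploy job from the question (the $DEPLOY variable name is just an example; you would set it when starting a manual pipeline run or in a schedule):

deploy app:
  stage: deploy
  variables:
    APP_NAME: app
  rules:
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master" && $CI_PIPELINE_SOURCE == "merge_request_event" && $DEPLOY == "true"'
      changes:
        - $APP_NAME/**/*
      when: manual
  extends: .deploy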
I have defined pipeline in GitLab CI/CD like below:build app: stage: build variables: APP_NAME: app rules: - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master" && $CI_PIPELINE_SOURCE == "merge_request_event"' changes: - $APP_NAME/**/* when: manual extends: .build deploy app: stage: deploy variables: APP_NAME: app rules: - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master" && $CI_PIPELINE_SOURCE == "merge_request_event"' changes: - $APP_NAME/**/* when: manual extends: .deployWhen I want to merge code to master, both build and deploy stage must be manually run and they must succeed. However, I would like sometimes merge code to master after only when build stage will succeed and do not run deploy stage. Is it possible?
How to create merge request with optionally GitLab CI/CD manual pipeline?
If your Gradle project reads these variables only from gradle.properties, you can append them to that file in your CI job before the build runs: script: - 'echo variable_1=$variable_1 >> gradle.properties' - 'echo variable_2=$variable_2 >> gradle.properties'
I have a Gradle project in which a file (settings.gradle) requires an Artifactory user and Artifactory password to download all dependencies. For local builds we put these values in the gradle.properties file and it works well. So I put the Artifactory user and Artifactory password in GitLab variables (Settings -> CI/CD -> Variables -> Add variable), but now my settings.gradle is unable to fetch these values. After some research I found out that these variables are not global. Can someone help me make these values global, or let me know if I am doing something wrong? Thanks in advance for the help.
how to set global environment variable in gitLab ci/cd pipeline?
Maybe it's just a typo? Shouldn't your 'Reelase' be 'Release'?
In a Jenkins pipeline, I read a file and save it to a variable. Then, using contains(), I want to verify whether a particular string exists in the variable. It does exist, but the code never enters the if block. Any idea? I can print the entire file as a string, so the string assignment from the file read is also working. There is some issue with the contains logic. Please provide your thoughts. pipeline { agent any stages { stage('read prop file') { steps { echo "1" powershell 'C:\\JenKin_Jobs\\NetUSeIDrive.bat' script { env.filename="C:/Temp/JenkinConsoleLog/$JOB_NAME/GenerateSQA_138.log" env.logtostring=readFile(file: filename) //echo "${env.logtostring}" if (("${env.logtostring}".contains('Reelase is 861'))) { echo "Match found " } } } } } }
Contains looks not working in jenkin pipeline
There's no native solution for this. However, I've approximated something like this using pre-commit hooks. Basically, there is a pre-commit hook script that generates the possible choices and edits the variables: key in the .gitlab-ci.yml to contain all the appropriate options whenever they change. I also set up pre-commit autofixing in the GitLab pipeline, so you never have stale options.
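A rough sketch of what such a hook could look like in .pre-commit-config.yaml (the script name and path are hypothetical; the script itself would scan your repo, e.g. for the microservices, and rewrite the options: list in .gitlab-ci.yml):

repos:
  - repo: local
    hooks:
      - id: regenerate-ci-options
        name: Regenerate gitlab-ci variable options
        entry: ./scripts/regenerate_ci_options.sh   # hypothetical helper script
        language: script
        files: ^(microservices/|\.gitlab-ci\.yml)
        pass_filenames: false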
I’m trying to create a parent child pipeline on Gitlab using gitlab-ci but I need to share variables from parent to child pipeline. I’m using the “options” capability to have a drop down menu with pre-filled values just to avoid TYPO while filling the variable. Is there any way to create the options starting from a bash script?For example:variables: - myVar options: script: - find -type f | grep microservicesThank you! Cheers
Dynamically create gitlab-ci variables starting from a script
I'm not sure whether I understood your question correctly, but if you're asking whether it's possible to run the query you mentioned in Compass, then yes. There are 2 ways: provide each stage separately, as in the screenshot below, or provide the whole aggregation query as an array via Create New => Pipeline From Text:
I'm reading through the mongodb pipeline documentation athttps://www.mongodb.com/docs/compass/current/aggregation-pipeline-builder. Based on this in mongodb compass gui , I can create , name and save a pipeline. However to get the data I want, I would ideally like to run 2 separate queries from the original collection, then merge the results producing a 'Y' shaped pipeline. you can do something like:db.collection.aggregate([ { "$match": { "PENALTIES": { $ne: "$0.00" } } }, { "$unionWith": { "coll": "collection", "pipeline": [ { $match: { $and: [ { "PENALTIES": { $eq: "$0.00" } }, { "INTEREST": { $ne: "$0.00" } } ] } } ] } }, { "$merge": { "into": "new-collection" } } ])but is this possible in mongo compass?
Combining pipelines in mongodb compass gui
Availability of source depends on the actual shell Jenkins uses. As mentioned in the comments, you probably should use . instead of source. However, activation and deactivation of the venv are tied to the scope of each sh step. If you have multiple steps and multiple sh calls, you need to activate the venv in each one where you need access to the Python libs/tools installed into the venv. This is because activation applies only to the current shell: if you activate it in one sh step, the venv settings are effectively gone once that step ends, because the next sh step spawns a new shell.
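A sketch of the stage with creation, activation and use kept in one sh step (the pip command is just an example of work that needs the venv):

stage('Create and use venv') {
    steps {
        sh '''
            python3.11 -m venv venv
            . venv/bin/activate
            pip install -r requirements.txt   # any command that needs the venv goes in this same sh block
        '''
    }
}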
I'm using declarative pipeline for jenkins and my goal is to have a separated virtual environment for each project in jenkins/workspace. I've created a new pipeline with the following code:pipeline { agent any stages { stage('Pull all changes') { steps { git branch: "master", url: "[email protected]" } } stage('Create and Activate venv') { steps { sh 'python3.11 -m venv venv' sh 'source venv/bin/activate' } } } post { always { sh 'deactivate' } } }However I'm getting this error:When I try to do the same steps via SSH everything is working fine and venv is activated. Also I don't know if it's important but I have 2 version of the python on the server (python3.9, python3.11)
Jenkins Declarative Pipeline - Activating python venv
You can include an $or inside the $and: { "$match": { "$and": [ { "is_active": { "$eq": True } }, { "is_delete": { "$eq": False } }, { "$or": [ { "abc_id": { "$eq": abc } }, { "xyz_id": { "$in": [xyz] } } ] } ] } }
{ "$match": { "$and": [ { "is_active": { "$eq": True }},{ "is_delete": { "$eq": False }}] } }This is my match condition for lookup aggregation and it is working fine.Now I need to add two more conditions and these are not mandatory fields they may benull or not, I think I needorcondition inside, withandcondition as top query when I passabcandxyz, it is working, if I didn't pass both or anyone its failing (not returning proper data).{ "abc_id" : { "$eq": abc } },{ "xyz_id" : {"$in": [xyz]} }How can I achieve this? Thanks for inputs.
MongoDB aggregation with 'and' and 'or' and 'in' conditions
I created a linked service for the Azure SQL database and the blob storage account, and created a dataset of the SQL database for the source: Dataset of blob storage for the sink: When I previewed the data by entering a file name I got the error below: I got the above error because I do not have that file in my blob storage. In Data Factory the file is created automatically while debugging the pipeline, without entering a filename. I just gave the file path where my parquet file needs to be saved in the sink dataset and debugged the pipeline; it executed successfully. My SQL table was copied to blob storage as a parquet file successfully.
I am new to Azure. I created a new ADF, a pipeline, a storage blob account and a copy data activity; the source is a SQL Server table and the sink output is a parquet file. But when I preview the data of my sink dataset, I get an error saying the required blob is missing. I want to create a directory as well, but whether I type in the folder name and file name or use parameters, I still receive the error. If I manually upload a file via the Azure Storage Explorer, the preview has no issue. Does anyone know what I missed? Thanks for the help. Cheers Albert
Required Blob is missing when preview data of a copy data activity sink dataset
Actually, according to the docs, once you map them in the env: section they are available as environment variables inside the script: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/set-secret-variables?view=azure-devops&tabs=yaml%2Cbash#use-a-secret-variable-in-the-ui os.environ.get('DB_USER') Edit: repro: python -c "import os, base64; print(base64.b64encode(bytes(os.environ.get('TEST_PLAIN'), 'ascii')))"
We are running integration tests, written in Python, in Azure Pipeline. These tests access a database, and the credentials for accessing the database are stored in a variable group in Azure, including secret variables. This is the part of the yaml file, where the integration tests are started:jobs: - job: IntegrationTests variables: - group: <some_variable_group> - script: | pdm run pytest \ --variables "$VARIABLE_FILE" \ --test-run-title="$TEST_TITLE" \ --napoleon-docstrings \ --doctest-modules \ --color=yes \ --junitxml=junit/test-results.xml \ integration env: DB_USER: $(SMDB_USER) DB_PASSWORD: $(SMDB_PASSWORD) DB_HOST: $(SMDB_HOST) DB_DATABASE: $(SMDB_DATABASE)The problem is, that we cannot read the value of SMDB_PASSWORD, as it is a secret variable. In order to use the secret variables, it is advised to use arguments in a PythonScript task (like here:Passing arguments to python script in Azure Devops) but i am not aware how to modify this script to be defines PythonScript, as it includes using pdm.
Passing azure secret variables to pytest in pipeline?
The issue is with the format of the variables section in the .gitlab-ci.yml file. According to the error message, the configuration of variables should be in key-value format, but the given value is in an incorrect format.To resolve the issue, try changing the format of the ver variable to a key-value pair, like so:variables: ver: "13"not: variables: ver: value: "13"
I have errors when pushing it to the repo. Error says:Found errors in your .gitlab-ci.yml: variables config should be a hash of key-value pairs value can be a hashWhat is wrong here?The code:stages: - build_android - build_ios - run_test variables: ver: value: "13" options: - "13" - "13.1" - "13.2" - "13.2.1" - "14.0.1" build_android: stage: build_android script: - flutter build apk build_ios: before_script: - export DEVELOPER_DIR=/Applications/Xcode_${{ver}}.app/Contents/Developer stage: build_ios script: - xcode-select -s $DEVELOPER_DIR - flutter build ios --no-codesign run_test: stage: run_test script: - flutter test
Gitlab CI pipeline for iOS for Flutter project
You can build a delay step into your pipelines yaml file between the setup of your docker image and your test execution.# Delay v1 # Delay further execution of a workflow by a fixed time. - task: Delay@1 inputs: delayForMinutes: '0' # string. Required. Delay Time (minutes). Default: 0.https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/reference/delay-v1?view=azure-pipelines
I am using a MariaDB docker image for integration tests. I start it in an Azure pipeline via the following commands: docker pull <some_azure_repository>/databasedump:<tag_number> docker run -d --publish 3306:3306 <some_azure_repository>/databasedump:<tag_number> And after that, integration tests written in Python are started. But when the code tries to connect to the MariaDB database, a MySQL error is returned. + 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0 Maybe the reason is that the MariaDB database is big and it needs some seconds to start. So my question is whether there is a way to add a sleep of several seconds in a pipeline execution, in a script or bash section.
How to pause azure pipeline execution for several seconds?
In the Synapse workspace, select the pipeline. Click Properties for that pipeline, and then select Related. The pipelines that depend on the selected pipeline will be listed.
I have a huge Azure Synapse project with visual integration pipelines. How can I see in which other pipelines a selected pipeline is called/triggered? How can I find the callers?
Azure Synapse: Show where Integration Pipeline is called
The sh node is a proper Jenkins built-in step that runs the command in the job. The .execute() call is a bit of a hack :) You're running the Groovy script through the Jenkins interpreter, which is not standard and does its own thing. I'd avoid it and stick with the Jenkins sh step, which has standard behaviour.
I have a Jenkins that runs in a container. I was trying to debug a groovy file that is running in Jenkins pipeline and found out it is not executing from the workspace for some reason. Below is the Jenkins pipelinepipeline { agent any stages { stage('testing') { steps { script { sh ''' ls ''' def proc = [ "ls"].execute() def output = proc.text println(output) } } } } }The shell command returns listing of the checked out repository, as expected. However same command executed in groovy script shows container root filesystem. That's not what I expected. What is going on here?Pipeline] } [Pipeline] // stage [Pipeline] withEnv [Pipeline] { [Pipeline] stage [Pipeline] { (testing) [Pipeline] script [Pipeline] { [Pipeline] sh + ls README.md docs jenkins modules scripts [Pipeline] echo aws bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var vault
Jenkins Pipeline Groovy script not executing in workspace
You can put the Python script file in an Azure Git repository and use the repository as the pipeline source. Then add the Run a Python script task to the pipeline and specify the script path as shown below. To run Python scripts in your repository, you can also use a script element and specify a filename in the YAML pipeline. For example: - script: python src/example.py
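A minimal YAML sketch of that approach (the script path is a placeholder for your own file in the repo):

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.x'
- script: |
    python scripts/process_json.py   # hypothetical path to your JSON-processing script
  displayName: 'Run JSON processing script'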
I am new to DevOps and this whole world, so I would like some help. I have an existing pipeline created. I also have a Python script that I have just created; this script gets a JSON file, replaces some content and overwrites the file content (I tested with a local file). Now I need to add this Python code to my pipeline tasks; the task should copy a repository file and process it. I Googled trying to find a way of achieving this, but found nothing. I do not even know where to place the Python script. Would anyone be able to point me in some direction? Thank you very much.
Run python script in a CI/CD step
Refer to this doc:Enable upstream sources in an existing feedCustom public upstream sources are only supported with npm registries.The cause of the issue is that custom public upstream sources are only supported with NPM registries. This is the limitation of Azure Artifacts.For a workaround, to use multiple private nuget sources, you can define the nuget resource in Nuget.config file and create nuget Service connection inProject Settings -> Service connections.Then you can use the service connections in nuget restore task.For example:Update:Nuget Restore task sample:- task: NuGetCommand@2 displayName: 'NuGet restore' inputs: feedsToUse: config nugetConfigPath: $(build.sourcesdirectory)/targetfolder/nuget.config externalFeedCredentials: 'nuget1027, test nuget feed'
In Azure pipelines, the Nuget Restore task allows me to specify a feed in my organization. However, that feed can apparently only have one upstream source per type (nuget, npm, etc.), and that upstream source is nuget.org. I have another nuget feed that I need to pull in as well, but since we're only allowed to have one upstream nuget source for our custom feeds, I'm not quite sure how best to tackle this.edit: Actually, even after removing nuget.org as the upstream branch, I still can't add another upstream branch aside from NPM.. I thought it was unable to be selected because I already had Nuget.org included.
Azure Devops restoring from multiple private nuget sources (same type)
Did you trytree.plot_tree(clf)Or maybetree.plot_tree(clf.named_steps['classifier'])?The error says thatclfobject cannot be accessed with['something']notation.
I try to visualize a decision tree after a pipeline.Here is my code:num_pipeline = Pipeline(steps=[ ('impute', SimpleImputer(strategy='mean')), ('scale', MinMaxScaler()) ]) cat_pipeline = Pipeline(steps=[ ('impute', SimpleImputer(strategy='most_frequent')), ('one-hot',OneHotEncoder(handle_unknown='ignore', sparse=False)) ]) from sklearn.compose import ColumnTransformer preprocessor = ColumnTransformer(transformers=[ ('num_pipeline',num_pipeline,num_cols), ('cat_pipeline',cat_pipeline,cat_cols) ], remainder='drop', n_jobs=-1) from sklearn.linear_model import LogisticRegression from sklearn import metrics from sklearn import tree clf = Pipeline(steps=[ ('preprocessor', preprocessor), ('classifier', tree.DecisionTreeClassifier()) ]) from sklearn import metrics clf.fit(X_train, y_train) # preds = clf_pipeline.predict(X_test) model = clf.score(X_test, y_test) print(f"Model score: {model}") # accuracy tree.plot_tree(clf['classifier'])But, I get a error which is: TypeError: 'DecisionTreeClassifier' object is not subscriptable.How can I fix it?I think everything is done properly but I still get the error, and I do not know how to fix it.
'DecisionTreeClassifier' object is not subscriptable
As a workaround, you can trigger the needed build through the REST API. Check this: Powershell to trigger a build in Azure DevOps
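A rough PowerShell sketch of queuing a pipeline run via the REST API from a step in the Main pipeline (organization, project, pipeline id, branch and the PAT are placeholders; the PAT needs build read & execute scope):

$organization = "my-org"
$project = "private"
$pipelineId = 42   # id of the Test pipeline definition
$pat = "<personal access token>"
$token = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))

$url = "https://dev.azure.com/$organization/$project/_apis/pipelines/$pipelineId/runs?api-version=7.0"
$body = '{ "resources": { "repositories": { "self": { "refName": "refs/heads/main" } } } }'

# Queue the run; the Main pipeline continues without waiting for the whole run to finish
Invoke-RestMethod -Uri $url -Method Post -Body $body -ContentType "application/json" -Headers @{ Authorization = "Basic $token" }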
I want to trigger a Test pipeline from a stage of Main pipeline, both the pipelines are present in different projects within the same organization. I am able to trigger the pipeline using the resource option but the problem is it triggers the Test pipeline when Main pipeline finishes successfully but I want to trigger the Test pipeline in between run of Main pipeline using an stage. Is it possible to achieve this using any feature of Azure Devops?For now I am adding this resource in Test pipeline yaml to trigger after Main pipeline.resources: pipelines: - pipeline: Test-Repo source: Test # Test pipeline from different project project: private trigger: true # enable the trigger
How to trigger Azure devops pipeline from different project within same organization using stage/resource?
You can always deploy previous versions of your application in a different release or build. You should have a quality assurance environment before the production environment so you can check whether new changes will work. If you want to combine a rollback deployment inside the same build, you can use stage conditions to add a new stage which will run only if previous stages fail. Check the failed() condition and combine it with the 'and', 'or' keywords https://learn.microsoft.com/en-us/azure/devops/pipelines/process/stages?view=azure-devops&tabs=yaml#conditions # stage B runs if A fails - stage: B condition: failed()
This is a general question that I have been having for couple of days now and after hours of searching google I am still not sure how it works.Say I have a single pipeline to look for my IaC code change, deploy if there are any changes, and also then build the code and then deploy to the same infrastructure created in the step before.So, it will look something like: PipelineStep1/stage 1:Look for changes in the IaC code (Terraform) and then deploy if there are any changes to .tf filesstep2/stage2:Build the npm applicationstep3/stage3:Run the testsstep4/stage4:deploy the built application to the Infrastructure.Now let's say the if the application fails to build (step2) or if the tests (step3) fail, how do we deal with the infrastructure rollback?
How to deal with IaC code (Infrastructure part of build pipeline) when the pipeline fails
Have a look at the rules block and the common if clauses for rules. For your specific question, adding the following should work: rules: - if: $CI_COMMIT_BRANCH =~ /^release/
I am working with gitlab ci and I need the job to run only when the branch is release/ and if it is any other branch the job will not run.I have tried to do it in many ways but none of them works, I would appreciate if you could help me, because I don't know what else to try.
How to prevent a job from running when the branch is different from release/
Does anybody know if there is an alternative for Azure DevOps pipelines? If by alternative you mean a task in an Azure DevOps pipeline that can do the same thing as 'ankane/setup-mariadb@v1' in GitHub, then the answer is NO. DevOps doesn't have a built-in task like this, and the marketplace also doesn't have an extension for it. So you have two ways: 1. If your pipeline runs on a Microsoft-hosted agent, everything has to be set up via commands: How to Install and Start Using MariaDB on Ubuntu 20.04 2. If your pipeline runs on a self-hosted agent, you can set up the environment (MariaDB) before starting the pipeline and then use it in your DevOps pipeline.
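A rough sketch of the first option on an ubuntu-latest hosted agent, mirroring the database, user and password names from the GitHub workflow (depending on the agent image you may have to stop or remove the preinstalled MySQL first):

steps:
- script: |
    sudo apt-get update
    sudo apt-get install -y mariadb-server
    sudo systemctl start mariadb
    sudo mysql -e "CREATE DATABASE DatabaseName;"
    sudo mysql -D DatabaseName -e "CREATE USER 'Username'@localhost IDENTIFIED BY 'Password';"
    sudo mysql -D DatabaseName -e "GRANT ALL PRIVILEGES ON DatabaseName.* TO 'Username'@localhost; FLUSH PRIVILEGES;"
    sudo mysql -D DatabaseName < ./test/database.sql
  displayName: 'Install MariaDB, create user and import database'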
I want to migrate my GitHub Actions pipeline to Azure DevOps, but unfortunately I wasn't able to find an alternative to the GitHub action "ankane/setup-mariadb@v1". For my pipeline I need to create a local MariaDB with a database loaded from a .sql file. I also need to create a user for that database. This was the code in my GitHub pipeline: - name: Installing MariaDB uses: ankane/setup-mariadb@v1 with: mariadb-version: ${{ matrix.mariadb-version }} database: DatabaseName - name: Creating MariaDB User run: | sudo mysql -D DatabaseName -e "CREATE USER 'Username'@localhost IDENTIFIED BY 'Password';" sudo mysql -D DatabaseName -e "GRANT ALL PRIVILEGES ON DatabaseName.* TO 'Username'@localhost;" sudo mysql -D DatabaseName -e "FLUSH PRIVILEGES;" - name: Importing Database run: | sudo mysql -D DatabaseName < ./test/database.sql Does anybody know if there is an alternative for Azure DevOps pipelines? Cheers,
Azure DevOps Pipeline local MariaDB
I don't think there is a general way to break down aggregation stage(s). But for your specific case, your query seems to be performing a common 'lookup-then-combine' operation: it performs 2 lookups to the users and usertypes collections and 'links' the lookup results together by the ids inside. An arguably more comprehensible way to perform the same behaviour would be as follows. Precaution: you can see there is a sub-pipeline inside a sub-pipeline. Usually that would be a bad idea, as it makes the pipeline complicated and hinders readability. For your case it may be ok, though, as we can keep it to a single $limit stage only and avoid excessive $lookup to gain some performance. db.workoutDetailSchema.aggregate([ { "$lookup": { "from": "users", "localField": "REF_Users", "foreignField": "_id", "pipeline": [ { "$lookup": { "from": "usertypes", "localField": "REF_UserType", "foreignField": "_id", "pipeline": [ { $limit: 1 } ], "as": "userType" } }, { "$unwind": "$userType" } ], "as": "people" } } ]) Mongo Playground (some wrangling stages are omitted to keep it simple for demo purposes) You can see a sub-pipeline is used to perform another lookup and handle the 'linkage'. Potentially it could be more performant, as it doesn't need to iterate through the lookup arrays.
I have thismongo playgroundI would like to break down thisprojectinto small chunks so I can see what's happening in the pipeline"$project": { "people": { "$map": { "input": "$peopleLookup", "as": "tempPeople", "in": { "$mergeObjects": [ "$$tempPeople", { "userType": { "$first": { "$filter": { "input": "$userTypeLookup", "cond": { "$eq": [ "$$tempPeople.REF_UserType", "$$this._id" ] } } } } } ] } } } }
Is there a way to break this 'project' portion of a pipeline down into smaller pieces?
If you want to run a regular expression outside your usual language/runtime, you should get used to the grep utility. A naive approach for your case: TICKET_NAME=$(echo $BRANCH_NAME | grep --only-matching --extended-regexp 'DOM-[0-9]+') (you might need to work out the actual ticket name regex) Friendly reminder: the branch name is accessible in the $BITBUCKET_BRANCH environment variable (for branch pipelines only, not custom or tag-triggered pipelines)
I have a bitbucket pipeline where I want to retrieve part of the branch name (the ticket name). I basically get the branch name with: - BRANCH_NAME="$(git rev-parse --abbrev-ref HEAD)" Now the branch name is something like fix/DOM-123-my-ticket-info. I would like to do something like - BRANCH_NAME="$(git rev-parse --abbrev-ref HEAD)" - TICKET_NAME= // a way to retrieve DOM-123 from $BRANCH_NAME Do you know how this would be possible? I haven't found anything on how such a thing could be done.
How to execute regex inside a bitbucket pipeline?
Resolved the issue. Please find the solution below and working drone pipeline.kind: pipeline type: docker name: data-importer steps: - name: restore-cache image: meltwater/drone-cache pull: if-not-exists settings: backend: "filesystem" restore: true ttl: 1 cache_key: "volume" archive_format: "gzip" mount: - ./.m2/repository volumes: - name: cache path: /tmp/cache - name: maven-build image: maven:3.8.6-amazoncorretto-11 pull: if-not-exists commands: - mvn clean install -DskipTests=true -Dmaven.repo.local=.m2/repository -B -V volumes: - name: cache path: /tmp/cache - name: rebuild-cache image: meltwater/drone-cache pull: if-not-exists settings: backend: "filesystem" rebuild: true cache_key: "volume" archive_format: "gzip" ttl: 1 mount: - ./.m2/repository volumes: - name: cache path: /tmp/cache trigger: branch: - main - feature/* event: - push volumes: - name: cache host: path: /var/lib/cache
I'm new to Drone pipeline and is interested to use it in my current project for CICD. My project tech stack is as follows:JavaSpring BootMavenI have created a sample drone pipeline, but not able to cache the maven dependencies which is downloaded and stored in .m2 folder. Always say the mount path is not available or not found. Please find the screen shot for the same:Drone mount path issueNot sure of the path to provide here. Can someone help me to understand the mount path which we need to provide to cache all the dependencies in .m2 path. Adding the pipeline information below:kind: pipeline type: docker name: config-server steps: name: restore-cache image: meltwater/drone-cache pull: if-not-exists settings: backend: "filesystem" restore: true cache_key: "volume" archive_format: "gzip" mount: - ./target - /root/.m2/repository volumes: name: cache path: /tmp/cache name: build image: maven:3.8.3-openjdk-17 pull: if-not-exists environment: M2_HOME: /usr/share/maven MAVEN_CONFIG: /root/.m2 commands: mvn clean install -DskipTests=true -B -V volumes: name: cache path: /tmp/cache name: rebuild-cache image: meltwater/drone-cache pull: if-not-exists settings: backend: "filesystem" rebuild: true cache_key: "volume" archive_format: "gzip" mount: - ./target - /root/.m2/repository volumes: name: cache path: /tmp/cache trigger: branch: main event: push volumes: name: cache host: path: /var/lib/cacheThanks in advance..
Drone Pipeline : Drone Cache mount path for Maven Repository not able to resolve
There are two approaches for GitLab. Cache pip. Read Cache Python dependencies. The most important thing is to specify the pip cache path: variables: PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip" cache: paths: - .cache/pip Or build a Docker image with all dependencies included. You can use your Docker image in GitLab CI/CD. Building Docker images is a large topic and a bit out of scope of the current question; you can start with the Use Docker to build Docker images documentation.
Problem summaryWhen runningmypyon my code, I keep getting manyLibrary stubs not installederrors.Few examples below:/opt/conda/envs/my_ci/lib/python3.9/site-packages/ray/core/generated/agent_manager_pb2.py:5: error: Library stubs not installed for "google.protobuf.internal.enum_type_wrapper" (or incompatible with Python 3.9) /opt/conda/envs/my_ci/lib/python3.9/site-packages/torch/utils/tensorboard/summary.py:8: error: Library stubs not installed for "six.moves" (or incompatible with Python 3.9) /opt/conda/envs/my_ci/lib/python3.9/site-packages/ray/rllib/algorithms/algorithm.py:28: error: Library stubs not installed for "pkg_resources" (or incompatible with Python 3.9)Currently I have to use the commandmypy --install-types --non-interactive my_folder --config-file=mypy.ini.And although it solves the issue, the problem now is that it takes at least 2 min for installing missing types. This is very long for our CI/CD pipeline.QuestionWhat are alternative ways of addressing missing library stubs?E.g., such that I could maybe 'split' mypyinstall types(or other solution that is more time-consuming and potentially can be put as part of docker image), from purerun mypycommand (that takes less time and runs as part of gitlab ci/cd pipeline).I tried runningmypy --install-typescommand first, and thenrun mypywithout success. Could it be that I am doing something wrong?I will appreciate any help and ideas!
How to tackle pipeline slow down due to `mypy --install-types`
The $unwind stage isn't suitable for your scenario. You need a $project stage instead: { $project: { _id: 0, A: "$_id.A", B: "$_id.B", sum: 1 } } Demo @ Mongo Playground
I want to unwind a group of fields but seems like unwind doesn't work on_idfield. I want to unwind the output because I want to write it in a new collection.Here is an example showing what I want to do:Some data:db.test.insertOne({A: "A1", B: "B1", num:1}) db.test.insertOne({A: "A2", B: "B2", num:2}) db.test.insertOne({A: "A1", B: "B2", num:3}) db.test.insertOne({A: "A2", B: "B1", num:4}) db.test.insertOne({A: "A1", B: "B1", num:5}) db.test.insertOne({A: "A2", B: "B2", num:6}) db.test.insertOne({A: "A1", B: "B2", num:7}) db.test.insertOne({A: "A2", B: "B1", num:8})My aggregation stage:db.test.aggregate([ { $group: { _id: {A:"$A",B:"$B"}, sum: { $sum: "$num" } } }, { $unwind: _id } ])The output:{ _id: { A: 'A1', B: 'B2' }, sum: 10 } { _id: { A: 'A2', B: 'B2' }, sum: 8 } { _id: { A: 'A1', B: 'B1' }, sum: 6 } { _id: { A: 'A2', B: 'B1' }, sum: 12 }My desired output:{ A: 'A1', B: 'B2', sum: 10 } { A: 'A2', B: 'B2', sum: 8 } { A: 'A1', B: 'B1', sum: 6 } { A: 'A2', B: 'B1', sum: 12 }I tried the following unwind code but it doesn't affect the result:db.test.aggregate([ { $group: { _id: {A:"$A",B:"$B"}, sum: { $sum: "$num" } } }, { $unwind: "$_id" } ])
MongoDB - Unwind the _id field in an aggregation stage
'{ var = "value" cmd = "echo -n \047" var "\047 | xxd -r -p | sha256sum | cut -d \" \" -f 1" var = ( (cmd | getline line) > 0 ? line : "Failed: " var ) close(cmd) print var }' but it's extremely unlikely that this is the best way to do whatever it is you're trying to do. I'm setting var to "Failed: " var if the pipeline to getline fails; you'll have to decide how you really want to handle that case. See http://awk.freeshell.org/AllAboutGetline for when/how to use getline and its many caveats.
I want to do something like this: '{ var = "value"; "echo -n "$var"|"xxd -r -p"|"sha256sum"|"cut -d \" \" -f 1"|getline var; }' I guess in awk I can't use pipelines this way, so how can this be implemented in awk?
Multiple pipelines in AWK
The problem with your code is that items are split into pipeline components, each of which has a different copy of the shell running it in parallel; so a singleifstatement can't have itsfisplit off into a different pipeline component.You can't make a conditional pipeline entry easily, but youcaneasily make a pipeline entry decide whether to run a command that does something (likesort) or one that doesn't do anything at all (likecat).generate_data | if [ "$want_sorting" = 1 ]; then sort -k1,1 else cat fi | consume_dataNow, the slightly less-easy approach:To get rid of this copy ofcatone can use a shell function that only conditionally creates the pipeline component withsortbefore running whatever was supposed to come later in the pipe; something like:maybe_sort_then() { if [ "$want_sorting" = 1 ]; then sort | "$@" else "$@" fi } generate_data | maybe_sort_then consume_data
I want to conditionally addsortto a shell pipeline, but I'm getting a syntax error when I try:want_sorting=0 generate_data() { for ((i=0; i<10; i++)); do echo "$RANDOM"; done; } consume_data() { while IFS= read -r line; do echo "got data: $line" done } generate_data | if [ "$want_sorting" = 1 ]; then sort | fi consume_dataHowever, this throws a syntax error:syntax error near unexpected token `fi'How can I addsort |to the pipeline only ifwant_sortingis set?
Conditionally add a component to a shell pipeline
This is the syntax to use:pwsh returnStdout: true, script: ''' $cred = New-Object -TypeName System.Management.Automation.PSCredential -argumentlist $env:CRED_APP_CATALOG_USR , $(convertto-securestring $env:CRED_APP_CATALOG_PSW -asplaintext -force) Connect-PnPOnline -Url $env:app_catalog_path -Credentials $cred Add-PnPApp -Path "./sharepoint/solution/samplesolution.sppkg" -Publish -Overwrite '''Alternatively you may use parentheses:pwsh( returnStdout: true, script: ''' $cred = New-Object -TypeName System.Management.Automation.PSCredential -argumentlist $env:CRED_APP_CATALOG_USR , $(convertto-securestring $env:CRED_APP_CATALOG_PSW -asplaintext -force) Connect-PnPOnline -Url $env:app_catalog_path -Credentials $cred Add-PnPApp -Path "./sharepoint/solution/samplesolution.sppkg" -Publish -Overwrite ''')With both syntaxes, you may store the output in a variable for further processing:def stdout = pwsh returnStdout: true, script: ''' $cred = New-Object -TypeName System.Management.Automation.PSCredential -argumentlist $env:CRED_APP_CATALOG_USR , $(convertto-securestring $env:CRED_APP_CATALOG_PSW -asplaintext -force) Connect-PnPOnline -Url $env:app_catalog_path -Credentials $cred Add-PnPApp -Path "./sharepoint/solution/samplesolution.sppkg" -Publish -Overwrite ''' echo stdout
I currently have a sample pipeline that works, but the Jenkins output does not show confirmation that the script worked. pipeline { agent any environment { CRED_APP_CATALOG = credentials('id-app-tenant') } stages { stage('ConnectToApp'){ steps { script{ pwsh ''' $cred = New-Object -TypeName System.Management.Automation.PSCredential -argumentlist $env:CRED_APP_CATALOG_USR , $(convertto-securestring $env:CRED_APP_CATALOG_PSW -asplaintext -force) Connect-PnPOnline -Url $env:app_catalog_path -Credentials $cred Add-PnPApp -Path "./sharepoint/solution/samplesolution.sppkg" -Publish -Overwrite ''' } } } } } I know I need to add returnStdout to the pwsh script somewhere, but I can't seem to find the syntax anywhere.
syntax to include Jenkins pipeline optional argument
Yes, you can. Check the sample below: pipeline { agent any parameters { booleanParam(name: 'FLAG', defaultValue: FLAG, description: '') } stages { stage('Build') { steps { script { echo "Build" } } } } }
Is it possible to set the default value of a choice parameter dynamically to TRUE or FALSE through the pipeline script, using the current value of that parameter itself as the input? That way, in the next execution there would be no need to set the default value again; it should take the value from the previous execution until the choice is changed once again.
Set default value of job param dynamically in Jenkins pipeline
Found the solution: sh '#!/bin/tcsh \n' + 'source ./mysettings.sh \n' + 'echo "Calling my alias" \n' + 'my_alias \n' Every sh step launches a new shell, so the whole script has to be passed in one sh call, including the line breaks. Further adding to the confusion was that the Jenkins documentation says it starts "a bash", but it actually launched /bin/sh, which in my case pointed to something else.
I have a legacy project in Jenkins that has to be pipelined (for later parallelization), hence moving from a simple tcsh script to a pipeline. Running the script as #!/bin/tcsh source ./mysetting.sh update works, but the same pipeline step fails due to missing alias expansion: stage ('update') { steps { //should be working but alias expansion fails sh 'tcsh -c "source ./mysettings.sh; alias; update"' //manually expanding the alias works fine sh 'tcsh -c "source ./mysettings.sh; alias; python update.py;"' } } Calling alias in the steps properly lists all the set aliases, so I can see them but not use them. I know that in bash alias expansion has to be enabled (#enable shell option for alias_expansion shopt -s expand_aliases), but in csh/tcsh that should be taken care of by source. What am I missing?
Jenkins Pipeline Groovy script tcsh alias expansion
You could use theManual Validation Task.Use this task in a YAML pipeline to pause a run within a stage, typically to perform some manual actions or validations, and then resume/reject the run.jobs: - job: waitForValidation displayName: Wait for external validation pool: server timeoutInMinutes: 4320 # job times out in 3 days steps: - task: ManualValidation@0 timeoutInMinutes: 1440 # task times out in 1 day inputs: notifyUsers: |[email protected][email protected]instructions: 'Please validate the build configuration and resume' onTimeout: 'resume'
I have two stages in my Azure DevOps pipeline: one with Pulumi Preview (let's call it Preview) and one with Pulumi Up (Up), in order to run my infrastructure as code. Both run from the same container and it takes a while to pull it. I want to manually approve the Preview before the implementation. Can I pull and run the container for both stages simultaneously, but have the last job of the Up stage wait until the Preview stage is approved? Currently they depend on each other as follows: trigger: - master pool: vmImage: 'ubuntu-latest' stages: - stage: Pulumi_Preview jobs: - job: Preview container: image: REGISTRY.azurecr.io/REPOSITORY:latest endpoint: ServiceConnection steps: - task: Pulumi@1 displayName: pulumi preview inputs: azureSubscription: 'Something' command: 'preview' args: '--diff --show-config --show-reads --show-replacement-steps' stack: $(pulumiStackShort) cwd: "./" - stage: Pulumi_Up displayName: "Pulumi (Up)" dependsOn: Pulumi_Preview jobs: - job: Up container: image: REGISTRY.azurecr.io/REPOSITORY:latest endpoint: ServiceConnection steps: - task: Pulumi@1 inputs: azureSubscription: 'Something' command: 'up' args: "--yes --skip-preview" stack: $(pulumiStackShort) cwd: "./"
Run two stages in an Azure DevOps pipeline "partially" in parallel
The default codec for an output is plain, and if the format option is not specified then that will call the .to_s method on the event. The .to_s method adds the timestamp and %{host}. You can prevent this by adding codec => plain { format => "%{message}" } to your syslog output.
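As a rough, untested sketch (the SSL options and the rest of the pipeline from the question are omitted here), the codec option simply sits next to the other settings of the syslog output block:

output {
  syslog {
    host     => "10.119.140.206"
    port     => "10514"
    protocol => "ssl-tcp"
    rfc      => "rfc5424"
    # emit only the original event message, without the plain codec's default to_s formatting
    codec    => plain { format => "%{message}" }
  }
}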
I am using a Logstash pipeline to ingest data into an rsyslog server, but the pipeline is adding an extra date stamp and %{host} at the beginning. Example: **Sep 22 04:47:20 %{host} 2022-09-22T04:47:20.876Z %{host}** 22-09-2022 05:47:20.875 a7bd0ebd9101-SLOT0 `TEST-AWS-ACTIVITY`#011970507 P 201059147698 `[FCH-TEST] [35.49.122.49] [TEST-251047********-******] [c713fcf9-6e73-4627-ace9-170e6c72fac5] OUT;P;201059147698;;;;/bcl/test/survey/checkSurveyEligibility.json;ErrorMsg=none;;{"body":{"eligible":false,"surveys":[]},"header":null}`** Can anyone tell where this extra part is coming from and how to suppress it? The data is coming from AWS CloudWatch on ECS containers. The pipeline is configured as: input { pipeline { address => test_syslog } } filter { if [owner] == "1638134254521" { mutate { add_field => { "[ec_part]" => "AWS_TEST"} } } } output { #TEST ACTIVITY Logs being sent via TCP to Logreceiver if [ec_part] == "AWS_TEST" { syslog { appname => "" host => "10.119.140.206" port => "10514" protocol => "ssl-tcp" ssl_cacert => "/etc/logstash/ca.crt" ssl_cert => "/etc/logstash/server.crt" ssl_key => "/etc/logstash/server.key" priority => "info" rfc => "rfc5424" } } }
Logstash pipeline adding an extra timestamp and %{host} at the beginning of the rsyslog message
I resolved this issue by: identifying and closing the open handles with --detectOpenHandles; dividing the test cases into chunks using --matchPathPattern and running those chunks in parallel in the GitLab pipeline; and generating separate coverage reports, then merging them using istanbul-merge.
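A hypothetical .gitlab-ci.yml sketch of that chunking idea (the job names and path patterns are invented, and --testPathPattern is the path-filtering flag name in current Jest releases; adjust to whatever your Jest version accepts):

test-chunk-1:
  stage: test
  script:
    # run only this chunk's slice of the suite
    - npx jest --detectOpenHandles --coverage --testPathPattern "src/moduleA"
  artifacts:
    paths:
      - coverage/

test-chunk-2:
  stage: test
  script:
    - npx jest --detectOpenHandles --coverage --testPathPattern "src/moduleB"
  artifacts:
    paths:
      - coverage/

Jobs in the same stage run concurrently when enough runners are available, and the per-chunk coverage artifacts can then be merged in a later job.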
When I run the test cases locally, they all complete within the time limit (5000). But when I run those test cases in the GitLab pipeline, they take much more time. I use GitLab version 10.
Why do Jest test cases take much more time to run inside the GitLab pipeline (throwing a timeout error inside the pipeline)?
The way I found to solve this problem was to add, to the test project, the generation of an XML file with the information I needed. I was then able to read the generated XML from a PowerShell task and save the values into variables as needed.
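For illustration, a hedged PowerShell sketch of the reading side — the file name and the XML node names below are assumptions; only the ##vso logging command is standard Azure Pipelines syntax:

# read the XML the test project generated and expose the numbers to later tasks
[xml]$results = Get-Content "$env:SYSTEM_DEFAULTWORKINGDIRECTORY/test-summary.xml"
$passed = [int]$results.TestRun.Passed      # illustrative node names
$failed = [int]$results.TestRun.Failed
$total  = $passed + $failed
$passRate = if ($total -gt 0) { [math]::Round(100 * $passed / $total, 2) } else { 0 }
# make the value available to subsequent tasks as a pipeline variable
Write-Host "##vso[task.setvariable variable=PassRate]$passRate"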
I'm running the task below in Azure Pipelines, and I need to access the percentage of tests passing and failing from this task in another task. Is this possible? Can I access this result through a variable? - task: DotNetCoreCLI@2 displayName: Execute Tests inputs: command: 'test' projects: '**\BaseProject.dll' workingDirectory: '$(System.DefaultWorkingDirectory)' Please can someone help me? Thanks.
Is it possible to access the test execution result in a task in Azure Pipelines through a variable?
You need to add a service connection, since you are communicating with other Azure services. You can add a service connection in the portal; it is under the project settings. Select "Service connections", after which a popup will appear in which you have to select "Docker Registry". Fill in the subsequent form with the subscription and ACR details. Another approach would be to use docker to pull the images directly. Refer to the documentation about it.
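Once the Docker Registry service connection exists, referencing it from the pipeline looks roughly like this (the connection name my-acr-connection is a placeholder):

resources:
  containers:
    - container: build_image
      image: test.net/rethrf:8944
      endpoint: my-acr-connection   # name of the Docker Registry service connection
jobs:
  - job: rf
    container: build_image          # the job now runs inside the authenticated image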
I need some help. I created a custom image that is pushed to Azure Container Registry and I use it as a base image for the Azure DevOps pipeline, but it seems it does not pull the image. azure-pipelines.yml: jobs: - job: rf container: image: test.net/rethrf:8944 Output: Error response from daemon: Head "https://test.net/v2/rethrf/manifests/8944": unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information. ##[warning]Docker pull failed with exit code 1, back off 2.89 seconds before retry. Any solution to this? Or did I miss any environment variables?
Azure Devops : unauthorized: authentication required,
You can permanently persist all messages sent to an SQS queue by adding an SNS topic in front of it that has an additional subscription. This would be a Kinesis Data Firehose subscription that has an S3 bucket configured as its destination. That way, any message published to the SNS topic would get sent both to the S3 bucket and to the SQS queue. Your SQS DLQ would work the same way as usual, and you could provide a way for clients to query failed messages from the S3 bucket at a later date (potentially much later than the maximum retention period SQS allows). See the links below for more information: https://docs.aws.amazon.com/sns/latest/dg/sns-firehose-as-subscriber.html https://docs.aws.amazon.com/firehose/latest/dev/create-destination.html#create-destination-s3
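A rough boto3 sketch of that wiring (every ARN, name and role below is a placeholder, and the Firehose delivery stream writing to S3 plus the queue/topic access policies are assumed to already exist):

import boto3

sns = boto3.client("sns")

# topic that fans each message out to both destinations
topic_arn = sns.create_topic(Name="dlq-fanout")["TopicArn"]

# 1) keep delivering to the existing SQS queue (with its DLQ) as before
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:123456789012:my-queue",
)

# 2) also archive every message to S3 via a Kinesis Data Firehose delivery stream
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="firehose",
    Endpoint="arn:aws:firehose:us-east-1:123456789012:deliverystream/dlq-archive",
    Attributes={"SubscriptionRoleArn": "arn:aws:iam::123456789012:role/sns-firehose-role"},
)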
Purpose: I created an SQS queue in hopes of storing bad data in a dead letter queue. My goal is to connect SQS to SNS, so that subscribers can get notified of the bad data stored in the dead letter queue. However, I understand now that data cannot be permanently stored in SQS because of its retention period. I also understand that SNS is known for PUSH, not PULL; therefore it can't receive the SQS messages and email those messages to subscribers. So, my question is: what is a way to keep the messages from SQS permanently? My goal is to store those messages somewhere safe for future use. I also looked into storing the messages in S3.
How to keep AWS SQS messages permanently?
The proper syntax isdef data = readYaml file: "cars.yaml" def brand = data.cars[0].brandYou can find the definition (and examples) in the pipelineutility steps documentation.
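To loop over every entry rather than just the first one, something along these lines should work inside the pipeline (a sketch assuming the cars.yaml shape from the question):

def data = readYaml file: "cars.yaml"
for (car in data.cars) {
    // each entry is a map with brand/fuel/transmission keys
    echo "brand=${car.brand}, fuel=${car.fuel}, transmission=${car.transmission}"
    // replace the echo with whatever command you need to run per entry
}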
I have a YAML file template which is set out like so: cars: - brand: bmw fuel: petrol transmission: manual - brand: mercedes fuel: diesel transmission: auto - brand: audi fuel: electric transmission: semi-auto I am trying to write a function for a Jenkins pipeline that reads through the YAML file, picks up the brand, fuel and transmission values of each array entry, and executes commands based on these variables for each entry; I assume it would have to be some kind of loop. I have tried using readYaml, however it doesn't seem to pick up the first "brand" in this case and I get errors; I'm not sure if this YAML layout is ideal for my use case. I have tried: def cars = readYaml file = cars.yaml def brand = cars[brand] I expected echo brand to return a list of brand names, which I could then use to look up the fuel and transmission values of that brand. However, I get this error: groovy.lang.MissingPropertyException: No such property: mkp for class: WorkflowScript Any ideas?
How to loop through a YAML list file in Groovy/Jenkins
It is not possible to pass/return variables from child pipelines to parent pipelines. We can only pass parameters from the parent pipeline to the child pipeline. A workaround to this problem is to write the values into a file (txt) in the child pipeline and read this file from the parent pipeline. Look at the following demonstration. Let's say you have a text file (with some data) in the storage account. In the child pipeline, after the copy and set variable activities complete, create a copy data activity. The source dataset would be the above text file without the first row as header. Here, add additional columns with your variable values. Select a sink (without header). In mappings, import the schema and delete the columns that are not required. Invoke this pipeline from the parent pipeline; the values will be written out as a text file. You can finally read this file in the parent pipeline (using a Lookup activity) and utilize those values.
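Once the parent pipeline's Lookup activity has read that file, the values can be referenced in later activities with an expression along these lines (the activity name and the auto-generated column name Prop_0 are illustrative):

@activity('Lookup variable file').output.firstRow.Prop_0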
I have a child pipeline in Azure Data Factory which is called by a master one via an "Execute Pipeline" activity. This child pipeline has a couple of variables which I need in my master pipeline. In this child pipeline I even use some "Set variable" activities to change the variables' values, like in the example. I would like to pass the final values of my variables from the child pipeline to the master one. If every step runs successfully, my variables would all have "true" values at the end of my child pipeline. I would like these values to be passed to the master pipeline, like in the example. Is it possible?
How to pass variable from a child pipeline to master pipeline?
To make pipelines dependent on each other, you can create a trigger per pipeline and make those triggers dependent on other triggers. To create dependent pipelines, we can use tumbling window triggers. Create triggers for the pipelines to tables A, B, C via Trigger >> New/Edit >> Choose Trigger >> New >> Type: Tumbling window >> Configure Properties >> Save. To make the pipelines to tables D and E dependent on the pipelines to tables A, B, C, select Trigger > Advanced > New, then choose the trigger to rely on, with the proper offset and size. This creates a dependency on that trigger. For example, Trigger 4 is created for pipeline 4, and in its Dependencies we add Triggers 1, 2 and 3, which are created for pipelines 1, 2 and 3 respectively. This dependent tumbling window trigger executes only when the other triggers it depends on have successfully executed.
I have 5 pipelines in my Azure Data Factory; each pipeline copies data to a different table. There is a dependency between some of these tables: tables D & E depend on tables A, B & C. What I'm doing to refresh all the data is the following execution sequence: Exec Pipeline to table A > Exec Pipeline to table B > Exec Pipeline to table C > Exec Pipeline to table D > Exec Pipeline to table E. I could execute the Pipeline to table E before the Pipeline to table D with no problem, but neither of them can be executed before the Pipelines for tables A, B & C. The idea I had to make this more organized and easier to schedule was to change pipeline D and add 3 steps there that execute the Pipelines for tables A, B & C, and in the Pipeline to table E add a step to execute pipeline D. However, this would create a kind of dependency of Table E on Table D, which I don't want. If I need for any reason to update JUST table E, it wouldn't be possible, because I would need to update table D first. I want both Pipelines to tables D & E to have a kind of validation that the Pipelines to tables A, B & C have run before they themselves run. Is there a way to make these dependencies more organized?
Is there a way to create dependencies between pipelines? (A single pipeline depending on 3+ others)
My solution to this is to place the envmodules in the config file: # config.yaml envmodules: stdenv: "StdEnv/2020" gcc: "gcc/9.3.0" Then, in the Snakefile, you can use: configfile: 'config.yaml' envmodules = config['envmodules'] rule test: input: text='catthis.txt' output: "test.txt" envmodules: envmodules['stdenv'], envmodules['gcc'] shell: "cat {input.text} > ./{output}" That makes it clear at the rule level which envmodules are expected, while allowing you to keep track of versions in one place. I use something similar for containers. If you really want to keep the yaml separate, you can load multiple config files, though it's then more challenging to ensure keys are not overwritten.
I have a snakefile like this (shortened, only for demonstration): rule test: input: text='catthis.txt' output: "test.txt" envmodules: "modules.yaml" shell: "cat {input.text} > ./{output}" My modules.yaml file contains this: modules: "StdEnv/2020", "gcc/9.3.0" So in the end, I'd like to have something like this when snakemake is called: rule test: input: text='catthis.txt' output: "test.txt" envmodules: "StdEnv/2020", "gcc/9.3.0" shell: "cat {input.text} > ./{output}" Perhaps this is not possible, but I found nothing on the snakemake website that would allow this. It'd be much more practical for me to have one file to point to rather than pasting the modules to be loaded into all the rules (here I'm showing one, but imagine I have 50 rules...). When running snakemake (assuming everything is in the same directory) with snakemake -p --cores 1 --use-envmodules, it doesn't work (using the modules.yaml), but it does work if the modules are put directly in the snakefile.
How to use ".yaml" to load modules on cluster using "envmodules" and "--use-envmodules" in snakemake