Dataset columns: Response (string, 8–2k characters), Instruction (string, 18–2k characters), Prompt (string, 14–160 characters).
There is some code on GitHub, PowerShell\Remotely, which I believe does what you want. The code does this: first it calls Invoke-Command as a job and waits for the job, like this:

$testjob = Invoke-Command -Session $sessions -ScriptBlock $test -AsJob -ArgumentList $ArgumentList | Wait-Job

Then it constructs a result object based on the job, giving you each of the streams (most of them, at least). This is the complicated part of the code. The code would more than likely need to be refactored for your purposes, as it is intended for testing. If you have the time, you should be able to refactor it to meet your needs. The code, as is, gives you the streams as properties of an object, but you should be able to pipe those objects to wherever you want them to go (you just have to remember to check each of them).
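A minimal sketch of the same idea, assuming a single remote session; the stream property names come from PowerShell's remoting child-job objects, and the result shape here is my own illustration rather than Remotely's actual code:

$job = Invoke-Command -Session $session -ScriptBlock $test -AsJob -ArgumentList $ArgumentList | Wait-Job
$child = $job.ChildJobs[0]

# Gather each stream off the child job into a single result object
$result = [pscustomobject]@{
    Output  = $child.Output
    Errors  = $child.Error
    Warning = $child.Warning
    Verbose = $child.Verbose
}

$result.Errors | Out-String   # forward whichever stream you need, remembering to check each one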
I have a script that manages tasks across hundreds of virtual servers; it runs Invoke-Command in a job and outputs the details to a web page. My problem is that errors thrown within the Invoke-Command scriptblock are not piped.

This command correctly pipes the error (cannot connect to server) and outputs it as a string:

Invoke-Command [dead server] -ScriptBlock { Write-Error "Test" } *>&1 | Out-String

This command seems to ignore the error completely; it is not displayed at all:

Invoke-Command [live server] -ScriptBlock { Write-Error "Test" } *>&1 | Out-String

This command correctly pipes the error out as a string:

Invoke-Command [live server] -ScriptBlock { Write-Error "Test" *>&1 | Out-String }

Using both the first and third examples I can pipe everything out, but it's not that simple. I will probably need to run complex scripts with this, and it's unreasonable to expect me to redirect every single command so that errors are picked up. I can't even find anything to wrap it in to pipe everything out. Actually a function would work, but it seems a very roundabout solution, as there doesn't seem to be a way to convert a scriptblock to a function, so I'd have to put the scriptblock in an Invoke-Command in a function in an Invoke-Command, which is in a job in a PSSession...
Redirect Errors From Invoke-Command
You are closing the pipes too early. Typically, you close fd2[0] before you use it in dup2. And because you redirect STDOUT_FILENO before the second fork, the second filter no longer has access to the original stdout. The following code works:

int main() {
    int fd1[2];
    pipe(fd1);
    pid_t pid1;
    if ((pid1 = fork()) > 0) {
        char data[] = "Hello world!";
        close(fd1[0]);                    // OK, will no longer be used
        write(fd1[1], data, sizeof(data));
        close(fd1[1]);
        wait(NULL);
        exit(EXIT_SUCCESS);
    } else if (pid1 == 0) {
        int fd2[2];
        pipe(fd2);
        pid_t pid2;
        close(fd1[1]);                    // OK, not used from here on
        if ((pid2 = fork()) > 0) {
            dup2(fd1[0], STDIN_FILENO);   // redirections for filter1
            dup2(fd2[1], STDOUT_FILENO);
            close(fd1[0]);                // close everything except stdin and stdout
            close(fd2[0]);
            close(fd2[1]);
            execl("./upcase", "upcase", NULL);
            perror("execl upcase");
            exit(EXIT_FAILURE);
        } else if (pid2 == 0) {
            close(fd1[0]);                // not used here
            dup2(fd2[0], STDIN_FILENO);   // redirection for filter2
            close(fd2[0]);
            close(fd2[1]);                // close all that remains
            execl("./reverse", "reverse", NULL);
            perror("execl reverse");
            exit(EXIT_FAILURE);
        } else {
            ...
I need to implement such IPC-schema:runtime data -> filter1 -> filter2 -> output. (same asdata | filter1 | filter2).I can pass data to first filter, but to second I can not (maybe because in first child stdout fd is not closed). How to properly implement such schema?P.S. filter1 and filter2 just read from stdin and write to stdout.My code:int main() { int fd1[2]; pipe(fd1); pid_t pid1; if ((pid1 = fork()) > 0) { char data[] = "Hello world!"; close(fd1[0]); write(fd1[1], data, sizeof(data)); close(fd1[1]); wait(NULL); exit(EXIT_SUCCESS); } else if (pid1 == 0) { int fd2[2]; pipe(fd2); pid_t pid2; dup2(fd1[0], STDIN_FILENO); dup2(fd2[1], STDOUT_FILENO); close(fd1[0]); close(fd1[1]); close(fd2[0]); close(fd2[1]); if ((pid2 = fork()) > 0) { execl("./upcase", "upcase", NULL); perror("execl"); exit(EXIT_FAILURE); } else if (pid2 == 0) { close(fd1[0]); close(fd1[1]); dup2(fd2[0], STDIN_FILENO); close(fd2[0]); close(fd2[1]); execl("./reverse", "reverse", NULL); perror("execl"); exit(EXIT_FAILURE); } else { perror("pid2"); exit(EXIT_FAILURE); } } else { perror("pid1"); exit(EXIT_FAILURE); } }
Multi-pipe does not work
This occurs because pipelines use subshells; by piping to cat, you've made the first if-then block execute in a subshell. Hence, VAR_0='Changed' is executed, but only in the subshell. Try:

#!/bin/bash
VAR_0='Unmodified'
if true
then
    VAR_0='Changed'
    echo "VAR_0: $VAR_0"
fi | cat
echo "VAR_0: $VAR_0"

Redirecting to a file (">") does not create a subshell; thus that variable assignment persists.
This question already has answers here:bash: piping output from a loop seems to change the scope within the loop - why?(2 answers)Closed7 years ago.I've read that you can redirect I/O into and out of various constructs in Bash (such as "if" and "while"). When I was trying this out, I noticed that using a "|" at the end of an if construct prevented any variables that I overwrote inside the if construct from taking effect; but if I use ">" instead, then the variable modifications take effect.#!/bin/bash VAR_0='Unmodified' if true then VAR_0='Changed' fi | cat echo "VAR_0: $VAR_0" ########### VAR_1='Unmodified' if true then VAR_1='Changed' fi > tmpFile rm tmpFile echo "VAR_1: $VAR_1"Running Bash version 4.3.11 on 64-bit Linux produces the following output:VAR_0: Unmodified VAR_1: ChangedNote the only difference is how I'm redirecting stdout from the if construct. Why is the "|" preventing VAR_0 from being changed?
Using '|' from Bash's if construct prevents variable being set [duplicate]
You can separate them into two commands:

shuf -i 1-10 -n 6 | sort -n > a.txt && shuf -i 11-20 -n 6 | sort -r >> a.txt

">>" lets you append to the file, and "&&" runs a second command on the same line.
I just want to create a file filled with half of increasing numbers followed by half of decreasing numbers using pipeline. So I could automate the process to generate thousand of files. Below is my code and generated files output.shuf -i 1-10 -n 6 | sort -n |shuf -i 11-20 -n 6|sort -r > a.txt20 18 17 13 12 11
Pipeline in Linux command is not working
You need to include a PROCESS block in your Invoke-Build function.

function Invoke-Build {
    [CmdletBinding()]
    Param (
        [Parameter(Mandatory = $True, ValueFromPipeline = $True, ValueFromPipelineByPropertyName = $True)]
        [string[]]$directories
    )
    PROCESS {
        Write-Output $directories
    }
}

If you call the function like this:

"dir1", "dir2", "dir3" | Invoke-Build

the function will iterate over the directories one at a time. More information on implementing pipeline support can be found here: http://learn-powershell.net/2013/05/07/tips-on-implementing-pipeline-support/
I have two functions one which outputs a set of directories And one needs to receive that set and do a foreach on it, however it seems the second function is only receiving one of the directories (the last one).What am I doing wrong.Get-Directories { return Get-ChildItem | Where-Object { $_.PSIsContainer -eq $True } } function Invoke-Build { [CmdletBinding()] Param( [Parameter(Mandatory=$True,ValueFromPipeline=$True,ValueFromPipelineByPropertyName=$True)] [string[]]$directories ) Write-Output $dir foreach ($dir in $directories) { Set-Location $dir Build Set-Location .. } Get-Directories | Invoke-BuildThe output though is just the last directory found by Get-Directories. I need the second function to accept array input as I plan to make it do things asynchronously.
Powershell: Piping values from one function to another
If I have understood your question correctly, then:

function a(declara) {
    var x = declara.x;
    var y = declara.y;
    // return an object whose "print" key holds a function
    return {
        print: function () {
            console.log(x + " , " + y);
        }
    };
}

a({x: 1, y: 2}).print();
I worked on pretty good number of functions like given belowfunction a(declara,callback) { x = declara.x; y = declara.y; return callback.call(this,[declara]); } a({x:1,y:2},function(){ console.log(x+" , "+y); });but I found thats not actually what callback does, could you please explain me, how can a piping structure be implemented as follows:a({x:1,y:2}).print()(Something similar to what jQuery does, also kindly explain me the same!)
Piping functions
OK, as I mentioned in my comment, the issue seems to relate to the pipe characters, so I had to evaluate the variable using eval and escape the pipe characters. In order to ensure the function produce interprets $@ correctly, I fed the command as follows. Note also that the variables are all now quoted.

produce() {
    local curfile="$1"
    # Remove the first element of the list of passed arguments
    shift
    if [ ! -e "${curfile}" ]; then
        # Run the subsequent command as shown in the list of passed arguments
        echo "$@"
        eval "$@ > ${curfile}"
    fi
}

produce outputfile.vcf samtools view -bq 20 input.bam \| samtools mpileup -Egu -t DP,SP -f hs37d5formatted.fa - \| bcftools call -cNv -
I have a functionproducewhich determines whether a file is present and if not it runs the following command. This works fine when the command output simply writes to stdout. However in the command below I pipe the output to a second command and then to a third command before it outputs to stdout. In this scenario I get the output writing to file correctly but it does not echo the preceding $@ from the produce function and the contents of the initial unpopulated outputfile.vcf (contains header columns) which is generated by the pipeline command on execution is also being outputted to stdout. Is there a more appropriate way to evaluate$@ > "${curfile}"produce() { local curfile=$1 #Remove the first element of the list of passed arguments shift if [ ! -e "${curfile}" ]; then #Run the subsequent command as shown in the list of passed arguments echo $@ $@ > "${curfile}" fi } produce outputfile.vcf samtools view -bq 20 input.bam | samtools mpileup -Egu -t DP,SP -f hs37d5formatted.fa -| bcftools call -cNv -
What is the best way to evaluate two variables representing a single pipeline command in bash?
You can useGroup-ObjectinsteadWhere-Object. In that case you will have two group: one for false condition and other for true.$SampleData=@' Bar,Foobar 1,A 1,B 1,C 1,D 1,E 2,A 2,B 3,A 3,B 3,C 4,D 5,A 5,B 5,E 6,A 7,A 7,C '@|ConvertFrom-Csv $SampleData| Group-Object Bar| # I am join both your conditions into one. Group-Object {($_.Count -eq 2) -and ( (($_.Group.Foobar -contains 'A') -and ($_.Group.Foobar -contains 'B')) -or (($_.Group.Foobar -contains 'A') -and ($_.Group.Foobar -contains 'C')) )}| # To have false before true. Sort-Object {$_.Values[0]}| # Expand groups. Select-Object @{Name='Condition';Expression={$_.Values[0]}} -ExpandProperty Group| # Display results to user. Format-Table -GroupBy Condition
I have a powershell project that I am working on that is reading data from a database using a function, then I am using the pipeline to filter that data to find what I want. One of the requirements is that I should display all irrelevant data as well as the data that the gets filtered.Get-Foo($Connection) | Group Bar| ? Count -eq 2 | ? {$_.Group.Foobar -contains "A" -and $_.Group.Foobar -contains "B" -or $_.Group.Foobar -contains "A" -and $_.Group.Foobar -contains "C"} |So basically I want to output all records that contain either only A and B or only A and C, But I also want to display all of the records that don't contain this.Sample Data: (A, B, C, D, E) (A, B) (A, B, C, D) (A, B, E) (A) (A, C) Current Output: (A, B) (A, C) Required Output (A, B, C, D, E) (A, B, C, D) (A, B, E) (A) - List all the false outcomes of filtering (A, B) - Then list the true (A, C)
Use pipeline to show filtered data and data that was filtered away
The code as written in C has two branches: a conditional branch when testing the while loop condition, and an unconditional branch to jump back to the beginning of the loop after the loop body. The generated assembly reorders the code in order to eliminate the unconditional branch, so that only the conditional branch is needed. This elimination of a branch instruction produces the speedup the video's author mentions; refer to the flow images shown in the video to see the difference between the two. The part about pipeline stalls and branch prediction is unrelated to this point and instead raises another thing to be aware of: every conditional branch can potentially lead to pipeline stalls, so limiting the number of conditional branches can be advantageous.
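A rough C-level sketch of the transformation; this is my own illustration of what the compiler effectively does, not the exact assembly from the video:

/* Original shape: a conditional branch at the top of each iteration plus an
   unconditional jump back to the top, i.e. two branches per iteration. */
while (counter < 20) {
    ++counter;
}

/* Rotated shape: the test moves to the bottom, leaving one conditional branch
   per iteration. The guard test before the loop can be dropped here because
   the compiler knows counter starts at 0, which is less than 20. */
do {
    ++counter;
} while (counter < 20);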
I'm a newbie to embedded systems. I'm trying to learn and improve as much as I can. I'm stuck at a point and it's probably OK to just ignore and move on. However, I can't help but let it tickle my mind. I feel like I have to understand this.On my way, one of the best resources I was able to find is this YouTube playlist by Quantum Leaps, LLC:Embedded Systems Programming CourseInLesson 2after the time 6.12, Mr. Samek explains some stuff regarding pipeline stalls. At the first glance, I didn't understand a word of it. Then I did some research and readings about pipelines, bubbles, branches etc. to get familiar with concepts and to understand basic mechanisms behind them. I then watched the video again but I still don't understand how the second case is faster.Edit to make the question self-contained as much as possible:In the video, the code is written as follows:int main() { int counter = 0; while (counter < 20) { ++counter; } return 0; }After the compilation code gets this structure:... ++counter; // Let's call this line "Line X" while (counter < 20) { // Assembly instructions to go back to Line X } ...If I'm not mistaken, Mr. Samek says second code is faster since it avoids a branch. But I don't understand how.Any help (and advice to learn & improve) is appreciated, thanks in advance!
Explanation for this specific case of pipeline stall
As per @Gerold's suggestion above, the Categorized Jobs View plugin is able to aggregate a number of selected jobs and therefore provide a single-page view of multiple pipelines (see attached).
We have many build pipelines running in Jenkins, each for separate projects and with multiple jobs (test, integration, deploy, quality, performance, release etc).I am looking for some sort of radiator that will provide an aggregate single-page view of all pipelines, indicating if any have single jobs broken.However, having looked around I can't find anything suitable. Has anyone seen anything similar at all? Would appreciate some pointers before attempting to build one...
Jenkins CI pipeline radiator
From the link you gave, I found that the number of instructions that can be completed in 5 cycles is 4. So the average execution time of the pipelined processor = 2 / 4 = 0.5 ns, and the speedup is 1.6 / 0.5 = 3.2 :)
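Another way to arrive at the same 0.5 ns figure and the final ratio, using the numbers already in the question:

Non-pipelined time per instruction = CPI / f = 4 / 2.5 GHz = 1.6 ns
Pipelined time per instruction (no stalls, one completion per cycle) = 1 / 2 GHz = 0.5 ns
Speedup = 1.6 ns / 0.5 ns = 3.2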
Consider a non-pipelined processor with a clock rate of 2.5 gigahertz and average cycles per instruction of four. The same processor is upgraded to a pipelined processor with five stages; but due to the internal pipeline delay, the clock speed is reduced to 2 gigahertz. Assume that there are no stalls in the pipeline. The speedup achieved in this pipelined processor is_______________.My SolutionSpeed up = Old execution time/New execution timeOld execution time = CPI/2.5 = 4/2.5 = 1.6 nsWith pipelining, each instruction needs old execution time * old frequency/new frequency (without pipelining) = 1.6 * 2.5 / 2 = 2 nsThere are 5 stages and when there is no pipeline stall, this can give a speed up of up to 5 (happens when all stages take same number of cycles). So, average execution time = 2 / 5 = 0.4 nsSo, speed up compared to non-pipelined version = 1.6 / 0.4 = 4Ref: Q: 12.10http://faculty.washington.edu/lcrum/Archives/TCSS372AF06/HW8.docIs this solution correct? The answer to this is given as 3.2
Speed up with pipelining
You can use Docker volumes. You start your container as

docker run -v /host/path:/container/path ...

and then you can pipe data to files in /host/path and they will be visible in /container/path, and vice versa.
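A minimal sketch of what that looks like end to end; the image name and paths are placeholders, and the container is assumed to read and write files under /data:

docker run -d --name myserver -v /srv/appdata:/data myimage

echo "hello" > /srv/appdata/input.txt    # appears inside the container as /data/input.txt
cat /srv/appdata/output.txt              # reads whatever the container wrote to /data/output.txt

# A named pipe created on the shared directory can also be used for streaming:
mkfifo /srv/appdata/stream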
I would like to have custom server which listens inside docker's container (e.g on TCP 192.168.0.1:4000). How to send data in and out from outside of the container. I don't want to use host ports for bridging. I would rather use pipelines or something which not take host network resources. Please show me full example of docker command.
How to connect docker's container with pipeline
You are right. The first chart has two instructions being fetched during the second cycle; unless specified otherwise, this cannot be done. There are circumstances in which this is allowable:

- The instruction fetch is divided into two stages, IF1 and IF2, each of which takes 1 cycle.
- IF1 and IF2 can be overlapped.
- The data path and instruction cache support 2 simultaneous operations.
I'm studying CPU pipelining, and had a trouble.I want to know which one is right pipelining in below pictureIn my opinion, the first Gantt chart is kinda "structural hazard" becuase "IF" stage is partially overlapped. I think that using one stage for two instruction is not allowed. So I think that second one is right....Am i right?
Which one is right one in pipelining?
It is completely correct to do so; we do that in Pitivi, for example: https://git.gnome.org/browse/pitivi/tree/pitivi/timeline/previewers.py#n965. Ideally, though, it is best to make sure no errors happen in the first place :)
HiI want to know if it is correct create a pipeline within a callback I give you an example of what I have implemented:I have a callback that is responsible for receiving error messages. When I get an error that's the problem I launch a new pipeline to the problem fixed. This works but not if it is correct implementation or can give any problems laterThanks!
create a pipeline in a callback function (gstreamer)
That'll work. You've just got some syntax issues with your function definitions and with how you're passing the parameters:

Function Get-Square {
    [CmdletBinding()]
    Param([Parameter(ValueFromPipeline=$true)]$x)
    $x * $x
}

Function Get-Cube {
    [CmdletBinding()]
    Param([Parameter(ValueFromPipeline=$true)]$x)
    $x * $x * $x
}

Function Get-Result {
    [CmdletBinding()]
    Param([Parameter(ValueFromPipeline=$true)]$x, $Cmdlet)
    $x | . $Cmdlet
}

10 | Get-Result -Cmdlet Get-Square
10 | Get-Result -Cmdlet Get-Cube

Output:

100
1000
In .NET if you have a subroutine whose implementation might change from one call to another, you can pass a delegate to the method that uses the subroutine.You can also do this in Powershell. You can also use scriptblocks whichhave been described as Powershell's equivalent of anonymous functions. Idiomatic powershell, however, makes use of powershell's pipeline parameter bindings. But neither delegates nor scriptblocks seem to make use of Powershell's pipeline parameter bindings.Is there a (idiomatic) way to pass a powershell commandlet to another commandlet in a way that preserves support for pipeline parameter bindings?Here is a code snippet of what I'd like to be able to do:Function Get-Square{ [CmdletBinding()] Param([Parameter(ValueFromPipeline=$true)]$x) PROCESS{$x*$x} } Function Get-Cube{ [CmdletBinding()] Param([Parameter(ValueFromPipeline=$true)]$x) PROCESS{$x*$x*$x} } Function Get-Result{ [CmdletBinding()] Param([Parameter(ValueFromPipeline=$true)]$x,$Cmdlet) PROCESS{$x | $Cmdlet} } 10 | Get-Result -Cmdlet {Get-Square} 10 | Get-Result -Cmdlet {Get-Cube}
Is there a way to create a Cmdlet "delegate" that supports pipeline parameter binding?
As far as I can tell, as of spray-client 1.3.1 there is no way to customise a pipeline after it has been created. However, you can create custom pipelines for different types of requests. It's worth mentioning that the timeouts defined below are the timeouts for the ask() calls, not for the network operations, but I guess this is what you need from your description. I found the following article very useful in understanding a bit better how the library works behind the scenes: http://kamon.io/teamblog/2014/11/02/understanding-spray-client-timeout-settings/

Disclaimer: I haven't actually tried this, but I guess it should work:

val timeout1 = Timeout(5 minutes)
val timeout2 = Timeout(1 minutes)

val pipeline1: HttpRequest => Future[HttpResponse] =
  sendReceive(implicitly[ActorRefFactory], implicitly[ExecutionContext], timeout1)
val pipeline2: HttpRequest => Future[HttpResponse] =
  sendReceive(implicitly[ActorRefFactory], implicitly[ExecutionContext], timeout2)

Then you obviously use the appropriate pipeline for each request.
I currently have a REST call set up using spray pipeline. If I don't get a response within x number of seconds, I want it to timeout, but only on that particular call. When making a spray client pipeline request, is there a good way to specify a timeout specific to that particular call?
Spray/Scala - Setting timeout on specific request
There is a way to pause D's build until A finishes. You will have to install the "Parameterized Trigger Plugin". Once installed, go to D's configuration; under Build, add a build step and select "Trigger/Call builds on other projects". After that, put "A" in "Projects to build" and check "Block until the triggered projects finish their builds". Save the configuration. This should fulfill your requirement.
We have maven projects on git with structure of-- pro-A -- pro-B -- pro-C pro-D -- pro-EThese are all project with their own repo in git and their own build-pipeline in jenkins with stages as followsbuild -- deploy to TEST -- run tests -- (manual tigger) deploy to QAevery build gets deployed to maven repo with jenkins build number appended to it and merge to release branch from master and tag with the new version number: e.g. 1.0.9-649So, pro-A is parent of all projects, pro-B only depends on pro-A, pro-C and pro-D are at the same level they don't depend on each other but have dependency on pro-B, pro-E have dependency on all others pro-A,B,C,DWhen a change is pushed gitlab triggers a build for the respective project. Now the problem is that when, say A and D changes and Ds build is triggered before a there is a good chance that the build fails as it depends on the newer code of A.My question is, is it possible to pause the Ds build until A finishes building?I was thinking something like in pre-step of D try to see if latest commit has a later timestamp than the release branch then trigger a build on parent, but don't know how?
Building/Waiting for parent job Latest version
It would be

ifStage.pc = (ifStage.pc & 0xF0000000) | (idStage.instruction.imm << 2);

that is: take the current PC, apply an AND mask to keep bits 28..31, and then OR in the immediate shifted 2 places left. This assumes idStage.instruction.imm is a 26-bit immediate. If it can hold garbage in the high-order bits (26..31), then you would apply another AND mask:

ifStage.pc = (ifStage.pc & 0xF0000000) | ((idStage.instruction.imm & 0x3FFFFFF) << 2);
I am trying to implement the jump instruction to my Pipeline Simulator in theClanguage. I have been reading up on the J-Instruction for mips and seen its constructed by 26 bits imm and 6 bits opcode. After some further reading I found that the address to the jump instruction can be calculated using thisPC <- PC31-28::IR25-0::00I am not completely sure how I would implement this in theClanguage though. While calculating the ex stage I have been trying something like the code below but it doesn't work for me.if(idStage.instruction.type == J) { ifStage.pc = ifStage << idStage.instruction.imm; ifStage.pc = IfStage << 2; }How could I implement the PC <- PC31-28::IR25-0::00 in theClanguage?
Implementation of jump sequence
If the instruction set documentation lists fixed numbers of clocks per instruction or instruction variation, then it is a pretty safe bet that no, it is not pipelined in the way we would think of a modern processor. Remember that the word performance is relative, trading power and real estate for operations per second; nothing comes for free. In the microcontroller world you might be more interested in operations per watt and price per unit than operations per second. Even with what we might call inefficient instruction sets or implementations, we still turn down the clock rates on the parts to save power; that is how much performance we have to spare. It might be a good educational exercise to implement a pipelined clone if you are really interested in one.
Does 68HC11 have Pipeline technique for improving the performance of the Integrated Circuit? Is 68HC11 use pipelined? I didnt find useful information Thank you
Is 68HC11 pipelined?
No, you cannot. From http://sdn.sitecore.net/Articles/Media/Prevent%20Files%20from%20Uploading.aspx:

The uiUpload pipeline is run not as part of the Sheer event, but as part of the form loading process in response to a post back. This is because the uploaded files are only available during the "real" post back, and not during a Sheer UI event. In this sense, the uiUpload pipeline has not been designed to provide UI.

That page was written for v5.1 and 5.2, but I'm pretty sure it still applies. The page claims that you can emit JavaScript to the page like Ahmed suggested, but it didn't work when I tried it.
I'm currently trying to show aSheerResponse.YesNoCancel()dialog within the Save uiUpload pipeline process from Sitecore. The problem appears when I do that call and it throws aNullException. I thought it was weird so I started copying the code from Sitecore's DLL and adding it to my solution. After that, I found that if the propertyOutputEnableis false it returns aClientCommandthat isNULLand when it tries to add a control to it, the Exception appears. So Fixing that I was able to finish the execution of that method. Anyway I still can't show the dialog. So the question is: Can I show a Dialog from a Sitecore uiUpload pipeline?
Open a dialog from a Sitecore uiUpload Pipeline process
This is implemented in Mapreduce 0.4 and up.
There used to be dashboard of all jobs in appengine-mapreduce library at/_ah/mapreduce/statusURL. If I usecom.google.appengine.tools:appengine-mapreduce:0.2and navigate to the/mapreduce/statuspage I getRuntimeException: Not Implemented. Has been the dashboard moved toappengine-pipelineproject or is it simply dropped?If there isn't any dashboard out of box what is the best way to create similar one myself?
Is there any replacement of /_ah/mapreduce/status dashboard in new appengine-mapreduce library
As pointed out in the comments, you probably want this:

#!/bin/bash

from_directory="first_directory"
to_directory="second_directory"

rsync --archive $from_directory $to_directory
ls -R $to_directory/$from_directory

And if $from_directory and $to_directory are both absolute paths, $to_directory/$from_directory does not make sense; you might as well just do ls -R $to_directory.
I want to write a easy script shell like that:#!/bin/bash from_directory="first_directory" to_directory="second_directory" rsync --archive $(from_directory) $(to_directory) | ls -R $(to_directory)/$(from_directory)ORcp -r $(from_directory) $(to_directory) | ls -R $(to_directory)/$(from_directory)I have this error ==> ls: impossible to reach in / home / jilambo / week2 / shooter_game: no file or directory of this type.In the second time, it's ok because the first_directory have been copied to the segond directory.Thanks.
Write a bash script that lists all files and subdirectories
It is very straightforward, but rather than giving you the code, I suggest you go and read a bit on the topic; try this one: http://www.jonobacon.org/2006/08/28/getting-started-with-gstreamer-with-python/

The src pad name for h264parse is 'src', not 'src0', and that is why it returns NoneType. 'src0' is usually only used when you have an element with request pads (like a tee), but this is not the case for h264parse. Feel free to post a more complete code attempt if you still cannot make it work.
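For reference, a minimal sketch of the linking, assuming the old gst-python 0.10 bindings; the file names are placeholders and the probe callback is only illustrative:

import gst

pipeline = gst.Pipeline("mypipe")
src   = gst.element_factory_make("filesrc",   "src")
parse = gst.element_factory_make("h264parse", "parse")
mux   = gst.element_factory_make("avimux",    "mux")
sink  = gst.element_factory_make("filesink",  "sink")

src.set_property("location", "input.h264")    # placeholder input file
sink.set_property("location", "output.avi")   # placeholder output file

pipeline.add(src, parse, mux, sink)
gst.element_link_many(src, parse, mux, sink)

def on_buffer(pad, buf):
    # inspect the buffer here
    return True

parse.get_pad("src").add_buffer_probe(on_buffer)   # 'src', not 'src0'
pipeline.set_state(gst.STATE_PLAYING)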
How do I need to link this gstreamer pipeline in python code? (Not by using gst.launch()! )filesrc ! h264parse ! avimux ! filesinkWhen I try to create pad object -h264parse.get_pad('src0')it returns NoneType. I am also attaching bufferprobe to this pad.
What is the proper way to link this gstreamer pipeline?
For fixed-size messages you would add a FixedLengthFrameDecoder in front of your business handler. See: http://netty.io/3.6/api/org/jboss/netty/handler/codec/frame/FixedLengthFrameDecoder.html
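For example, a minimal sketch of the pipeline, assuming Netty 3.x and the 32 KiB fixed message size from the question:

// SSL first, then a decoder that accumulates exactly 32 * 1024 bytes per
// message, then the business logic handler
pipeline.addLast("ssl", new SslHandler(engine));
pipeline.addLast("frameDecoder", new FixedLengthFrameDecoder(32 * 1024));
pipeline.addLast("handler", new BusinessLogicHandler());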
In my current scenario, I have an existing non-netty client that is sending a fixed message size (32 * 1024 bytes) to my existing non-netty server. I am in the process of changing my server to use Netty, I am unclear on the handlers I need to add to my pipeline before my business logic handler. If I am going to be using SSL, then I will add the SSL handler first in the pipeline and with my business logic handler being last. So what handlers do I need in the middle? Do I need a set size FrameDecoder (if that exists)? The message is not delimited by any characters, so I don't think I need to use DelimiterBasedFrameDecoder. Nor will I need to use a StringDecoder or StringEncoder.… … pipeline.addLast("ssl", new SslHandler(engine)); // Anything to add here for fixed sized byte[] messages?????? // and finally add business logic handler pipeline.addLast("handler", new BusinessLogicHandler()); … …For the bootstrap I have set the following options:this.bootstrap.setOption("keepAlive", true); this.bootstrap.setOption("sendBufferSize", 32*1024); this.bootstrap.setOption("receiveBufferSize", 32*1024); this.bootstrap.setOption("tcpNoDelay", true);Do I need to set the writeBufferHighWaterMark option too?Thank you
Pipeline setup for handling fixed sized messages in Netty
In your while {} loop, you are calling stdin.close(). On the first iteration of the loop, the stream is retrieved from the Process (and happens to be open), written to, flushed, and closed(!). Subsequent iterations then get the same stream from the process, but it was closed on the first iteration, so your program throws an IOException.
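A minimal sketch of the fix: obtain the streams once, keep stdin open across writes, and close it only when you are done. The sentences list is a hypothetical stand-in for however you feed the input, and this assumes the process emits one output line per input line:

OutputStream stdin = process.getOutputStream();
BufferedReader stdout = new BufferedReader(new InputStreamReader(process.getInputStream()));

for (String sentence : sentences) {
    stdin.write((sentence + "\n").getBytes());
    stdin.flush();                              // flush between writes, but do not close
    System.out.println("[Stdout] " + stdout.readLine());
}
stdin.close();                                  // closing signals EOF to the process, so do it once, at the end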
I try to communicate with the process by using this way:Process process = Runtime.getRuntime().exec("/home/username/Desktop/mosesdecoder/bin/moses -f /home/username/Desktop/mosesdecoder/model/moses.ini"); while (true) { OutputStream stdin = null; InputStream stderr = null; InputStream stdout = null; stdin = process.getOutputStream(); stderr = process.getErrorStream(); stdout = process.getInputStream(); // "write" the parms into stdin line = "i love you" + "\n"; stdin.write(line.getBytes()); stdin.flush(); stdin.close(); // Print out the output BufferedReader brCleanUp = new BufferedReader(new InputStreamReader(stdout)); while ((line = brCleanUp.readLine()) != null) { System.out.println("[Stdout] " + line); } brCleanUp.close(); }This works fine. However, I am stuck with a problem when I write the pipeline more than one time. That is - I can write to the Outputstream pipeline more than one time. The error is (for the 2th iteration):Exception in thread "main" java.io.IOException: **Stream Closed** at java.io.FileOutputStream.writeBytes(Native Method) at java.io.FileOutputStream.write(FileOutputStream.java:297) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.**flush(BufferedOutputStream.java**:140) at moses.MOSES.main(MOSES.java:60)So, is there any way to fix this problem?
Writing OutputStream pipeline multiple times error - Java
You need to specify what Python "web server" you're using (e.g. Bottle? Maybe Tornado? CherryPy?), but more importantly, you need to supply the request headers and the HTTP response that go in and out when IE9 is involved. You can lift them off the wire using e.g. ngrep, or I think you can use Developer Tools in IE9 (F12 key). The most common quirks with IE9 that often do not bother other web browsers are mismatches in Content-Length (well, this DID bother Safari last time I looked), possibly Content-Type (this acts in reverse: IE9 sometimes correctly gleans the HTML MIME type even if the Content-Type is wrong), and Connection: Close. So yes, it could be a problem with HTTP pipelining: specifically, if you pipeline a request with an invalid Content-Length and no chunked transfer encoding, IE might wait for the request to "finish". This would happen in other web browsers too, but it could be that in IE this behavior overrides the connection being flushed and closed, while in other browsers it does not. These two hypotheses would match your observed symptoms. To fix it, you either switch to chunked transfer encoding, which replaces Content-Length in a way, or correctly compute its value; how to do this depends on the server. To verify quickly, you could issue a Content-Length that is surely too short (e.g. 100 bytes?) to see whether this results in IE un-hanging and displaying a partial web page.
Trying to debug a website in IE9. I am running it via Python.In chrome, safari, firefox, and opera, the site loads immediately, but in IE9 it seems to hang and never actually loads.Could this possibly be an issue with http pipelining? Or something else? And how might I fix this?
IE9 and Python issues?
I did this hack to get things moving, but if someone can improve on it or hint at a better solution, please share it. I load my items in the spider like this:

items = [item1.load_item(), item2.load_item(), item3.load_item()]

I then defined a function outside the spider:

def rePackIt(items):
    rePackage = rePackageItems()
    rePack = {}
    for item in items:
        rePack.update(dict(item))
    for key, value in rePack.items():
        rePackage.fields[key] = value
    return rePackage

where in items.py I added:

class rePackageItems(Item):
    """Repackage the items"""
    pass

After the spider is done crawling the page and loading items, I yield:

yield rePackIt(items)

which takes me to pipelines.py. In process_item, to unpack the item I did the following:

def process_item(self, item, spider):
    items = item.fields

items is now a dictionary that contains all the extracted fields from the spider, which I then used to insert into a single database table.
To keep things organized I determined there are three item classes that a spider will populate.Each item class has a variety of fields that are populated.class item_01(Item): item1 = Field() item2 = Field() item3 = Field() class item_02(Item): item4 = Field() item5 = Field() class item_03(Item): item6 = Field() item7 = Field() item8 = Field()There are multiple pages to crawl with the same items. In the spider I use XPathItemLoader to populate the 'containers'.The goal is to pass the items to a mysql pipeline to populate a single table. But here is the problem.When I yield the three containers (per page) they are passed as such into the pipeline, as three separate containers. They go through the pipeline as their own BaseItem and populate only their section of the mysql table, leaving the other columns 'NULL'.What I would like to do is repackage these three containers into a single BaseItem so that they are passed into the pipeline as a single ITEM.Does anyone have any suggestions as to repackage the items? Either in the spider or pipeline?Thanks
Repackage Scrapy Spider Items
The UglifyFilter renames the files it processes to add a .min.js suffix: jquery.js goes into uglify, jquery.min.js comes out. I would probably put the UglifyFilter last, like this:

match "**/*.js" do
  concat ["jquery.js", "libs/ufvalidator.js"], "application.js"
  uglify
end

but you could also change the filenames you're matching with the ConcatFilter:

match "**/*.js" do
  uglify
  concat ["jquery.min.js", "libs/ufvalidator.min.js"], "application.js"
end

or tell the UglifyFilter not to rename the files:

match "**/*.js" do
  uglify { |input| input }
  concat ["jquery.js", "libs/ufvalidator.js"], "application.js"
end
Consider the following folder full of javascripts I want to compile into a single one usingrake-pipeline:jquery.jssome-jquery-plugin.jsyet-another-jquery-plugin.jslibrary-1.coffeelibrary-2.coffeelibrary-1andlibrary-2depend on jQuery.Assetfilerequire "rake-pipeline-web-filters" output "dist" input "js" do match "**/*.coffee" do filter Rake::Pipeline::Web::Filters::CoffeeScriptFilter end match "**/*.js" do filter Rake::Pipeline::Web::Filters::UglifyFilter filter Rake::Pipeline::OrderingConcatFilter, ["jquery.js", "libs/ufvalidator.js"], "application.js" end endI have noticed that when concatenating everything, the scripts written in Coffeescript are at the top. Won'tOrderingConcatFilterprevent this from happening? What should I fix so that the Coffeescript source is after jQuery?Thank you.
rake-pipeline: How to append coffeescript files after jquery?
Make a grep command that will take the uncut lines and filter on the 13th field, like

grep -E '(\S+\s+){12}A\s'

and then pipe that to cut -f 6 and so on.
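Put together with the rest of your pipeline, that might look like the following; the file name is a placeholder, and the leading ^ anchor is my addition so that it is really the 13th field being tested:

grep -v '#' file_c_new.txt | grep -v 'seq-name' \
  | grep -E '^(\S+\s+){12}A\s' \
  | cut -f 6 | grep -o '[0-9]*' \
  | awk '{s+=$1} END {print s}'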
I wrote the following pipeline:for i in `ls c*.txt | sort -V`; do echo $i; grep -v '#' ${i%???}_c_new.txt | grep -v 'seq-name' | cut -f 6 | grep -o '[0-9]*' | awk '{s+=$1} END {print s}'; doneNow, I want to take 6th column (cut -f 6 and later code) of only those lines, which match certain grep in 13th column.These:cut -f 13 | grep -o '^A$'So that I look at 13th column and if grep matches, then I take this line and make rest of the code - counting sum of numbers in 6th column.Please, how can I do such a thing? Thanks.
How to filter pipeline data according to column?
The gist is this is very simple withcsv.DictWriter:>>> inputs = [{ ... "author": ["TIM ROCK"], ... "book_name": ["Truk Lagoon, Pohnpei &amp; Kosrae Dive Guide"], ... "category": "Travel", ... }, ... { ... "author": ["JOY"], ... "book_name": ["PARSER"], ... "category": "Accomp", ... } ... ] >>> >>> from csv import DictWriter >>> from cStringIO import StringIO >>> >>> buf=StringIO() >>> c=DictWriter(buf, fieldnames=['author', 'book_name', 'category']) >>> c.writeheader() >>> c.writerows(inputs) >>> print buf.getvalue() author,book_name,category ['TIM ROCK'],"['Truk Lagoon, Pohnpei &amp; Kosrae Dive Guide']",Travel ['JOY'],['PARSER'],AccompIt would be better to join those arrays on something, but since elements can be a list or astring, it's a bit tricky. Telling if something is a string or some-other-iterable is one of the few cases in Python where direct type-checking makes good sense.>>> for row in inputs: ... for k, v in row.iteritems(): ... if not isinstance(v, basestring): ... try: ... row[k] = ', '.join(v) ... except TypeError: ... pass ... c.writerow(row) ... >>> print buf.getvalue() author,book_name,category TIM ROCK,"Truk Lagoon, Pohnpei &amp; Kosrae Dive Guide",Travel JOY,PARSER,Accomp
I had items that scraped from a site which i placed them in to json files like below{ "author": ["TIM ROCK"], "book_name": ["Truk Lagoon, Pohnpei &amp; Kosrae Dive Guide"], "category": "Travel", } { "author": ["JOY"], "book_name": ["PARSER"], "category": "Accomp", }I want to store them in csv file with one dictionary per one row in which one item per one column as below| author | book_name | category | | TIM ROCK | Truk Lagoon ... | Travel | | JOY | PARSER | Accomp |i am getting the items of one dictionary in one row but with all the columns combinedMypipeline.pycode isimport csvclass Blurb2Pipeline(object): def __init__(self): self.brandCategoryCsv = csv.writer(open('blurb.csv', 'wb')) self.brandCategoryCsv.writerow(['book_name', 'author','category']) def process_item(self, item, spider): self.brandCategoryCsv.writerow([item['book_name'].encode('utf-8'), item['author'].encode('utf-8'), item['category'].encode('utf-8'), ]) return item
Arranging one items per one column in a row of csv file in scrapy python
Try this:

param(
    [Parameter(Mandatory = $true, ValueFromPipeline = $true)]
    $Process
)
process {
    New-Object PSObject -Property @{ Name = $Process.ProcessName }
}

Edit: if you need a function:

function Get-MoreInfo {
    param(
        [Parameter(Mandatory = $true, ValueFromPipeline = $true)]
        $Process
    )
    process {
        New-Object PSObject -Property @{ Name = $Process.ProcessName }
    }
}

Then you can use:

. .\get-moreinfo.ps1
Get-Process | Get-MoreInfo

Edit after comment: read about dot-sourcing a script.
I have an issue running the following script in a pipeline:Get-Process | Get-MoreInfo.ps1The issue is that only the last process of the collection is being displayed. How do I work with all members of the collection in the following script:param( [Parameter(Mandatory = $true,ValueFromPipeline = $true)] $Process ) function Get-Stats($Process) { New-Object PSObject -Property @{ Name = $Process.Processname } } Get-Stats($Process)
Powershell pipelining only returns last member of collection
Bit of a stab in the dark, but when I upgraded to rc6 today, this broke in an initializer:

if RAILS_ENV == 'production'

and was fixed with this:

if Rails.env.production?

Don't know if that's got anything to do with it.
I'm having a Rails 3.1 rc6 app on Heroku's cedar stack (ruby 1.9.2).I precompile assets using rake assets:precompile RAILS_ENV=production locally on my development machine.The problem is the generated md5 fingerprints in the precompiled filenames don't match the ones generated by the rails helpers (like asset_path) in production on Heroku.Does anyone have a clue why this is? How can I fix it? I can't precompile on Heroku, as they have a read only filesystem.
Rails 3.1 Asset Pipeline: Precompiled MD5 Fingerprints don't match
I think I know what this is, but I've only seen this in iOS 5. Due to the NDA I can't give details though on here. Post it in the Apple Developer Forum and i'll answer your post once you post the link.
I am trying to integrate twitter into an iOS app using Ben Gottlieb'sTwitter-OAuth-iPhonelibrary. When I run this on the simulator it works fine. However, when I run it on an iPhone it does not work. I get the following messages in debugger:> PIPELINE: logging URLS that fail to support pipelining heuristics > PIPELINE: Heuristics failed for: twitter.comAny ideas what is going on here, and how I can get this working?
PIPELINE: logging URLS that fail to support pipelining heuristics
You want to subdivide the image with a space-filling curve. An SFC recursively subdivides the surface into smaller tiles and, because of its fractal dimension, reduces the 2D complexity to a 1D complexity. Once you have subdivided the image, you can use the curve to continuously scan it. Alternatively, you can use a BFS and some sort of low-frequency-detail filter to continuously scan your image at increasing resolution. You may want to look at Nick's spatial index hilbert curve quadtree blog, but I don't think you can put the tiles together in the JPEG format (cat?). Or you can continuously reduce the resolution?

scanimage --resolution [1-150] --mode Color | convert - jpg:-
I have a custom made web server running that I use for scanning documents. To activate the scanner and load the image on screen, I have a scan button that links to a page with the following image tag:<img src="http://myserver/archive/location/name.jpg?scan" />When the server receives the request for a ?scan file it streams the output of the following command, and writes it to disk on the requested location.scanimage --resolution 150 --mode Color | convert - jpg:-This works well and I am happy with this simple setup. The problem is that convert (ImageMagick) buffers the output of scanimage, and spits out the jpeg image only when the scan is complete. The result of this is that the webpage is loading for a long time with the risk of timeouts. It also keeps me from seeing the image as it is scanned, which should otherwise be possible because it is exactly how baseline encoded jpeg images show up on slow connections.My question is: is it possible to do jpeg encoding without buffering the image, or is the operation inherently global? If it is possible, what tools could I use? One thought I had is separately encoding strips of eight lines, but I do not know how to put these chunks together. If it is not possible, is there another compression format that does allow this sort of pipeline encoding? My only restriction is that the format should be supported by the mainstream browsers.Thanks!
pipeline image compression
Your cmdlet should have a well-defined syntax, based on what you put in the Cmdlet attribute. For instance, here's the start of where I create my own clear-host cmdlet to replace the built-in clear-host function:

<Cmdlet("clear", "host")> _
Public Class Clearhost
    Inherits Cmdlet

From the Cmdlet attribute, the syntax for my cmdlet is "clear-host". You should be able to use that (since it's a string) and add it to the pipeline.
I'd like to programmatically assemble and run apipeline containing my own PSCmdlet. However, the Pipeline class only allows to add strings and Commands (which are constructed from strings in turn).var runspace = ...; var pipeline = runspace.CreatePipeline(); pipeline.AddCommand("Get-Date"); // ok var myCmdlet = new MyCmdlet(); pipeline.AddCommand(myCmdlet); // Doesn't compile - am I fundamentally // misunderstanding some difference between commands and commandlets? foreach(var res in pipeline.Invoke()) {...}I believe that what I'm doing should basically make sense... or is there a different way to do this?
How to programmatically add a PSCmdlet to a Powershell pipeline?
I don't think it is a good idea. How many issues you hit would depend on a few factors. listRespondants will be rooted and hence will have application lifetime; if a bunch of items get added, the memory footprint will keep increasing, so it comes down to the number of items in this list.

The following can be show-stoppers:

- An IIS reset or application domain recycle will remove all this information from your application. How are you planning to bring the items back into this list? A database?
- What if you have a web farm? This approach will not work as expected the moment you try to scale out. The reason being: even if you have the same module loaded on all the servers in the web farm, the data in the worker process is local, so listRespondants would be different on each of your servers unless you are loading it from some database.
I've declared an event on an HTTP Module so it will poll subscribers for a true/false value to determine if it should go ahead with its task of tweaking the HTTP Response. If only one subscriber answers true then it runs its logic.Does this make sense?Are there potential pitfalls I'm not seeing?public class ResponseTweaker : IHttpModule { // to be a list of subscribers List<Func<HttpApplication, bool>> listRespondants = new List<Func<HttpApplication, bool>>(); // event that stores its subscribers in a collection public event Func<HttpApplication, bool> RequestConfirmation { add { listRespondants.Add(value); } remove { listRespondants.Remove(value); } } public void Init(HttpApplication context) { if (OnGetAnswer(context)) // poll subscribers ... // Conditionally Run Module logic to tweak Response ... } /* Method that polls subscribers and returns 'true' * if only one of them answers yes. */ bool OnGetAnswer(HttpApplication app) { foreach (var respondant in listRespondants) if (respondant(app)) return true; return false; } // etc... }
Okay to implement a .NET event on an IHttpModule?
The producer/consumer model is a good way to proceed, and Microsoft has their new Parallel Extensions which should provide most of the groundwork for you. Look into the Task object; there's a preview release available for .NET 3.5 / VS2008. Your first task should read blocks of data from your stream and then pass them on to other tasks. Then, have as many tasks in the middle as logically fit; smaller tasks are (generally) better. The only thing you need to watch out for is to make sure the last task saves the data in the order it was read (because the tasks in the middle may finish in a different order from the one they started in).
I have an appliction right now that is a pipeline design. In one the first stage it reads some data and files into a Stream. There are some intermediate stages that do stuff to the stream of data. And then there is a final stage that writes the stream out to somewhere. This all happens serially, one stage completes and then hands off to the next stage.This all has been working just great, but now the amount of data is starting to get quite a bit larger (hundreds of GB potentially). So I'm thinking that I will need to do something to alleviate this. My initial thought is what I'm looking for some feedback on (being an independent developer I just don't have anywhere to bounce the idea off of).I'm thinking of creating a Parallel pipeline. The Object that starts off the pipeline would create all of the stages and kick each one off in it's own thread. When the first stage gets the stream to some certain size then it will pass that stream off to the next stage for processing and start up a new stream of its own to continue to fill up. The idea here being that the final stage will be closing out streams as the first stage is building a new ones so my memory usage would be kept lower.So questions: 1) Any high level thoughts on directions for this design? 2) Is there a simpler approach that you can think of that might apply here? 3) Is there anything existing out there that does something like this that I could reuse (not a product I have to buy)?Thanks,MikeD
C# Stream Design Question
You can use non-blocking sockets if you like. This involves a bit of coding to switch to them, as you will need to kick cURL aside, but it may improve performance because you will really be able to perform requests simultaneously. See the socket_set_blocking / stream_set_blocking functions.
I have a PHP client application that is interfacing with a RESTful server. Each PHP Goat instance on the client needs to initialize itself based on information in a /goat request on the server (e.g. /goat/35, /goat/36, etc.). It does this by sending an HTTP request to its corresponding URL via cURL. Working with 30+ goat objects per page load equates to 30+ HTTP requests, and each one takes 0.25 second - that's baaaad, as my goats would say. Lazy-loading and caching the responses in memory helps, but not enough.foreach ($goats as $goat) { $goat->getName() // goat needs to hit the REST API }The advantage of this technique is that my goats are all smart and encapsulated. The disadvantage is that the performance is horrible. The goats don't know how to queue their HTTP requests, one goat doesn't know if there are other goats that need to initiate a request, etc. I guess one alternative would be to build the goats externally:$urls = array('http://', 'http://', ...); // array of goat URLs $result = fancy_pipelined_http_request_queue($urls); foreach ($result as $xml) { $goat->buildSelfFromXML($xml); }I'm sure this is a well-known OO/REST dilemma that there are more advanced ways of solving, I just don't know where to look. Any ideas?
Non-blocking HTTP requests in object-oriented PHP?
You can use the Apache KeepAlive Off directive to prevent Apache from leaving connections open after a request. If you're stuck on an older 1.3 version of Apache, it's KeepAlive 0.

Update: the "Context" info on both these links reads "server config". According to this page, that means you can't use them in .htaccess files. If you can only change settings from .htaccess, I think you will need to use the SetEnv directive to set the nokeepalive environment variable. The examples on that page also show how to set variables only for specific files/paths or for specific browsers by user agent.
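A sketch of what that could look like in .htaccess, assuming mod_env and mod_setenvif are available; the browser-match pattern is only an example:

# Disable keep-alive for everything handled by this .htaccess
SetEnv nokeepalive 1

# Or only for particular user agents
BrowserMatch "Firefox" nokeepalive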
I've noticed that my pageview counts are being messed up by the pipe-lining feature in Firefox. People who visit using Firefox with pipelining enabled count two (or more) times for each visit.What is the best way to detect or block these duplicate requests? I need to know how to block it on my server, not just how to disable pipelining in Firefox.I'm using PHP and Apache.
Best way to stop Firefox pipe-lining from messing up my stats
You should recursively walk over your data structure and look for the value "AzureKeyVault@2". In this example I use ruamel.yaml, as it supports the YAML 1.2 spec (from 2009) and supports more than PyYAML does even when just parsing YAML 1.1 documents; you should be able to do the same with PyYAML. As there could be multiple mappings containing 'AzureKeyVault@2', I append all those found to a list; you can then decide to throw an error, take the first item, or take some other action.

import sys
from pathlib import Path
import ruamel.yaml

def find_task(d, name, result):
    if isinstance(d, dict):
        if name in d.values():
            result.append(d)
        else:
            for k, v in d.items():
                find_task(v, name, result)
    elif isinstance(d, list):
        for item in d:
            find_task(item, name, result)

yaml = ruamel.yaml.YAML()
input = Path('example.yaml')
data = yaml.load(input)
tasks = []
find_task(data, 'AzureKeyVault@2', tasks)
assert len(tasks) == 1
print('>>> found <<<')
yaml.dump(tasks[0], sys.stdout)

which gives:

>>> found <<<
task: AzureKeyVault@2
displayName: Reading KVault
inputs:
  azureSubscription: PIPPO
  KeyVaultName: PLUTO
  SecretsFilter: '*'
  RunAsPreJob: true
I've many yml (ADO devops) files that are different each ones. It means that I cannot query nodes/childs using a fixed tree structure. I have to get the AzureKeyVault@2 "- task:" stanza that could be on different levels and could be on different childs, on different subtree indented.Is there a way to extract the task attributes using pyyml when I cannot know the exact yml structure? The task is always named "AzureKeyVault@2".... .... - task: AzureKeyVault@2 displayName: Reading KVault inputs: azureSubscription: PIPPO KeyVaultName: PLUTO SecretsFilter: '*' RunAsPreJob: true ... ...I've tried this demo code but it doesnt workwith open("example.yaml", "r") as stream: prime_service = yaml.safe_load(stream) params = set() for key in prime_service.keys(): print(key) params.update(prime_service[key]["task"]) print(sorted(params))
How to get nested nodes using pyyml
I did a workaround for my problem using an out-of-band solution. Basically, I use a group webhook to trigger a Lambda function in AWS, which in turn triggers the pipeline that I want to run in GitLab. This ensures that I'm still using the same runners as previously.

Group webhook -> AWS Lambda -> trigger pipeline through the API
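The Lambda side can be as simple as a call to GitLab's pipeline trigger API; the host, project ID, token, and ref below are placeholders:

curl --request POST \
  --form "token=$TRIGGER_TOKEN" \
  --form "ref=main" \
  "https://gitlab.example.com/api/v4/projects/$PROJECT_ID/trigger/pipeline"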
Currently, I'm using required pipeline configuration to "inject" a pipeline to all projects in a top-level group and sub-groups. This would allow me to run my pipeline first than run the pipeline at the local project level.However, required pipeline configuration isdeprecated in v15.9 and will be removed in v17.0.I understand that I can switch to compliance framework/pipeline. But I would need to manually make changes to each and every projects (Have tens of thousands) in the top-level groups and subgroups. And there is a possibility that new projects might be left out.The schema of the groups looked something like this:Top-level-Group|--Subgroup1|----Project1|----Project2|----Sub-Subgroup1|------Project3|--Subgroup2|----Project2I know there is aquestionon this previously. But the solution requires me to go to each and every project to include the "general CI/CD configuration file", which is very tedious and the possibility to miss out new projects is high.I want to be able to run my pipeline (not necessary in gitlab) for all projects in the top-level group and subgroups than continue whatever local pipeline is configured for the projects.Appreciate if anyone can provide any insights or recommendations. Currently, I have no idea on where/how to start.
Injecting a gitlab CI/CD pipeline at a group level
The errors you are experiencing with your YAML file are related to formatting, specifically the spaces just inside the braces (plus one missing space after a colon). Here's a corrected version of your YAML snippet:

- name: Rename directories to be replaced by symbolic links
  ansible.builtin.shell:
    command: "mv {{ item.original }} {{ item.new }}"
  args:
    chdir: /opt/aptitude/aptitude-21.1.1/
  loop:
    - {original: 'backup', new: 'backupbkp'}
    - {original: 'db', new: 'dbbkp'}
    - {original: 'ini', new: 'inibkp'}
    - {original: 'lock', new: 'lockbkp'}
    - {original: 'log', new: 'logbkp'}

Changes made:

- Removed the extra spaces just inside the braces, which is what yamllint's "too many spaces inside braces" errors point at.
- Added single quotes around all the values in your dictionaries for consistency, and a space after the colon in new:'dbbkp'.
- Checked the indentation to make sure it follows YAML's requirements.

Please try this corrected version in your YAML file. If you still encounter errors, it might help to run the entire file through a YAML linter or validator to catch any other potential issues.
Hi I all i m getting a yamlint error on my runbook and whatever i tried i couldnt pass the errors could you help lease? errors are in the loopI have tried removing the spaces or wrapping up items in double or single quotos, nothing seems to be working . i hope someone can help me- name: Rename directories to be replaced by symbolic links ansible.builtin.shell: command: "mv {{ item.original }} {{ item.new }}" args: chdir: /xx/tt/ll/ loop: - { original: 'backup', new: backupbkp } - { original: 'db', new:'dbbkp' } - { original: "ini", new: "inibkp" } - { original: "lock", new: "lockbkp" } - { original: "log", new: "logbkp" } 26:8 error too many spaces inside braces (braces) 26:43 error too many spaces inside braces (braces) 27:8 error too many spaces inside braces (braces) 27:36 error too many spaces inside braces (braces) 28:8 error too many spaces inside braces (braces) 28:39 error too many spaces inside braces (braces) 29:8 error too many spaces inside braces (braces) 29:41 error too many spaces inside braces (braces) enter code here
How to fix the yamllint "too many spaces inside braces" error?
These are the time zones that ADF expressions support. Go through them and use the time zone that matches your requirement.

Use the below expression in a Set variable activity. Here, as a sample, I have used IST:

@concat(convertFromUtc(utcnow(), 'India Standard Time'),'.csv')

You then need to use a dataset parameter for your dataset file name and pass the above variable to it.
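For CST specifically, the same pattern should work by swapping in the corresponding Windows time zone name (an untested variant of the expression above, not part of the original answer):

@concat(convertFromUtc(utcnow(), 'Central Standard Time'), '.csv')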
I need to write a file name which includes the current time in CST. How can I convert the time zone to CST in a file name in ADF?

Let me know if anyone knows the solution. There are numerous functions in ADF for converting to UTC, but my requirement is to convert to CST, and I cannot find such a function.
How to convert timezone in ADF to CST
You seem to be using a Web API filter instead of an MVC filter. Inherit your custom authorization filter from the System.Web.Mvc.FilterAttribute class and the System.Web.Mvc.IAuthorizationFilter interface, and implement the OnAuthorization() method.

For example,

using System.Web.Mvc;
using Sitecore;

namespace {YOUR_PROJECT_NAMESPACE}
{
    public class AuthenticationRequired : System.Web.Mvc.FilterAttribute, System.Web.Mvc.IAuthorizationFilter
    {
        public void OnAuthorization(AuthorizationContext filterContext)
        {
            {YOUR_CUSTOM_CODE_LOGIC}
        }
    }
}

You can configure your custom filter in your application at three levels:

Global level, by registering your filter in the Application_Start event of Global.asax.cs;
Controller level, by decorating a controller with your filter, placing the attribute above the controller name;
Action level, by decorating a given action method with your filter, in a similar way as for controllers.

In your case you simply need to decorate your controller with your custom filter.

Hope this helps.
My authorization filter attribute is not firing:

public class AuthenticationRequiredAttribute : System.Web.Http.Filters.AuthorizationFilterAttribute
{
    public override bool AllowMultiple
    {
        get { return false; }
    }

    public override void OnAuthorization(HttpActionContext actionContext)
    {
        base.OnAuthorization(actionContext);
    }
}

I have decorated the controller with the attribute:

[AuthenticationRequired]
public class ProfileController : BaseController
{
    public ProfileController(IMyRepository repository) : base(repository)
    {
    }
}

The AuthenticationRequired filter is not firing. What am I missing?
Sitecore 10 MVC AuthorizationFilterAttribute OnAuthorization method is not firing
You can use rules and add a condition on your variable. The job will only run if the condition is met.

job:
  script:
    - echo "do something"
  rules:
    - if: $MY_VAR == "string value"

You have a panel of examples in the doc.
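If the goal is to stop the whole pipeline before any runner picks up a job, a top-level workflow:rules block can also reject a manually started pipeline when a variable looks wrong. This is a rough sketch, with MY_VAR standing in for whichever parameter the user is expected to fill in:

workflow:
  rules:
    # Refuse to create the pipeline at all if the variable is missing or empty
    - if: '$MY_VAR == null || $MY_VAR == ""'
      when: never
    - when: always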
I use a GitLab pipeline for automation testing, where I rely on a set of parameters (variables) as input to the test automation pipeline. The challenge is that if a user inputs an incorrect value for any parameter, the job proceeds to execution and only fails some time later, so it still consumes resources and needs a runner for that stage.

Is there a way to implement parameter validation before manually starting the GitLab pipeline? We had something similar in Jenkins with the "Validating String Parameter" plugin.

I tried adding a .pre stage, but that still consumes resources and takes a couple of minutes.
GitLab CI/CD Pipeline Variable Validation like Jenkins?
I could not get Pratik's solution to work. I had to do the following:

A) Pass childItems to the ForEach:

@activity('Get Metadata1').output.childItems

B) In the ForEach, on Get Metadata2, use a copy of the dataset. Add item.name to the filename value and select Last Modified in the Field List:

@item().name

C) Open the copy of the dataset and add to the filename:

@dataset().filename
I am trying to get the Last Modified date of the files (CSV linked service) in my repository using the Get Metadata activity and its Field List.

However, I am getting the same date/time for all of the files: the LastModified date of the folder instead of the individual files inside.

My design pattern is to get the list of files and then iterate over each item in a ForEach (see the Synapse pipeline design pattern screenshot).

I am able to get the file names with:

@activity('Get Metadata').output.childItems

I just don't see a LastModified date in the attributes of the JSON. This code needs to go a level deeper:

@activity('Get Metadata').output.lastModified

What is the syntax I am missing to access each of the file properties?
Azure Synapse Pipeline Get Metadata Last Modified Date on file instead of on folder
I did some work on generating alternative routes, and I looked for a way to do this on the client side so as not to increase the load on the server side during the pipe. I found a workable solution and wanted to share it with you.

First of all, the following site gave me the idea: https://mothereff.in/utf-8

Afterwards, I found the Node.js package below and looked for a way to port its approach to the client side: https://github.com/mathiasbynens/utf8.js

Since the client application was developed with Delphi, I made the following change on the Delphi side and that solved it. The solution was the UTF8ToString call; UTF8Encode and similar functions did not provide a solution.

UTF8ToString(decryptionStream.DataString)

I hope it helps; if a different solution comes up, I will be happy to apply it. Thanks.
I get a stream of data from a query over PostgreSQL, then I encrypt this data in JSON format with the help of a cipher and send it directly to the user (chunked). The problem is that while the cipher encrypts, it does not encode the data as utf8. I am working with the code below. The pg-query-stream library is used, and updateProvider connects to the pool method of the pg library.

updateProvider.connect(async (err, client, done) => {
  if (err) throw err
  const query = new QueryStream('SELECT * FROM table')
  const stream = await client.query(query)
  stream.on('end', done)
  stream.pipe(JSONStream.stringify()).pipe(cipher).pipe(res)
})

My cipher usage is as follows:

const cipher = await crypto.createCipheriv('aes-256-cbc', Buffer.from('y642GnzY61Has9hoKPAQereA3Jih3XiW'), Buffer.from('usLmN9ZtJn1D0ArF'));

In this code, I want the output to be utf8-encoded during the pipe or before it. My only constraint is not to lose much performance. I found solutions using a for loop, but I did not consider them realistic.

Thank you.
Setting Crypto Library to UTF8 with NodeJS Pipe
It's advisable to use AWS Systems Manager Run Command instead of scp and ssh commands directly to the instance. Update the CodeBuild project role with the necessary permissions, store the files in a private S3 bucket, and restrict access by leveraging IAM or resource-based policies with conditions.

https://docs.aws.amazon.com/systems-manager/latest/userguide/walkthrough-cli.html

Notes:

There is no way to restrict an SG ingress rule to allow only the CodeBuild service; it just doesn't make sense. Even if you got a pool of IPs used by the CodeBuild service, it wouldn't be a secure solution.
A VPC-hosted CodeBuild instance? Maybe... but still not the most secure, and not as flexible as an SSM connection.
VPC endpoints for CodeBuild? They work the other way around.
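As a rough illustration of that approach (my own sketch, not part of the original answer), a buildspec could push the artifact to a private bucket and then ask SSM to run the copy on the instance; the bucket name, instance ID and paths below are placeholders:

version: 0.2
phases:
  post_build:
    commands:
      # Placeholder bucket/paths - replace with your own
      - aws s3 cp app.zip s3://my-private-artifact-bucket/app.zip
      # Run a shell command on the target instance via SSM instead of ssh/scp
      - >
        aws ssm send-command
        --document-name "AWS-RunShellScript"
        --instance-ids "i-0123456789abcdef0"
        --parameters commands="aws s3 cp s3://my-private-artifact-bucket/app.zip /opt/app/app.zip"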
I want to ssh to an AWS EC2 server on port 22. The port is currently open to all traffic, but I want to restrict it so that only the AWS CodeBuild pipeline can reach it. How can I do this?
I want to copy files to AWS EC2 from a buildspec.yml file; port 22 is open to all traffic. How can I restrict port 22 to CodeBuild only?
It can be solved by using this HTML input:

<input name='file' type='file' jsonaware='true'>

In the pipeline I use this library for Jenkins: https://github.com/janvrany/jenkinsci-unstashParam-library

@Library("jenkinsci-unstashParam-library") _

pipeline {
    agent any
    stages {
        stage('Stage') {
            steps {
                script {
                    if (object == 'variable') {
                        sh "echo ${input}"
                        //SOMETHING ELSE
                    } else if (object == 'file') {
                        def input_file = unstashParam "input"
                        sh "cat ${input_file}"
                        //SOMETHING ELSE
                    }
                }
            }
        }
    }
}
I'm trying to implement a pipeline in which, depending on the choice of a parameter, the user sees either a text input or a file input. I use Active Choices parameters (see the First Parameter and Second Parameter screenshots).

Then I need to get these values in the pipeline code, for example:

pipeline {
    agent any
    stages {
        stage('Demo Active Choices Parameter') {
            steps {
                echo "${services}"
            }
        }
    }
}

With text input it works. How can I get the file in the pipeline? Thank you, and sorry for my English.
Use input file in Jenkins Active Choices Reactive References plugin
You can use this hybrid Batch/PowerShell script to get the result you want:

<# : Batch Script Section
@rem # The previous line does nothing in Batch, but begins a multiline comment block in PowerShell. This allows a single script to be executed by both interpreters.
@echo off
Title Wifi Passwords Recovery by Hackoo 2023 & Mode 70,3
setlocal
cd "%~dp0"
Color 0B & echo(
Echo( Please Wait a while ... Getting SSID and Wifi Keys ...
Powershell -executionpolicy bypass -Command "Invoke-Expression $([System.IO.File]::ReadAllText('%~f0'))"
EndLocal
goto:eof
#>
# PowerShell Script Section begins here...
# Here we execute our PowerShell commands...
$Var=netsh wlan show profiles|SLS "\:(.+)$"|%{$SSID=$_.Matches.Groups[1].Value.Trim(); $_}|%{(netsh wlan show profile name="$SSID" key=clear)}|SLS "Conte.*:(.+)$"|%{$pass=$_.Matches.Groups[1].Value.Trim(); $_}|%{[PSCustomObject]@{SSID=$SSID;PASSWORD=$pass}}
$Var | Format-List | Out-File -FilePath ".\WifiKeys_List_Format.txt"
$Var | ConvertTo-Json | Out-File -FilePath ".\WifiKeys_JSON_Format.txt"
$Var | OGV -Title "Wifi Passwords Recovery by Hackoo 2023" -wait
ii ".\WifiKeys_JSON_Format.txt"
ii ".\WifiKeys_List_Format.txt"
I am trying to get all known WiFi SSIDs including the password. When I run the following command from a batch file, I can't get it to work and I get the error below. Can someone please help me?

powershell -command "(netsh wlan show profiles) | Select-String "\:(.+)$" | %{$name=$_.Matches.Groups[1].Value.Trim(); $_} | %{(netsh wlan show profile name="$name" key=clear)} >> %computername%.txt"

In Zeile:1 Zeichen:54
+ ... ng \:(.+)$ | {(netsh wlan show profile name=$name key=clear)} >> MEHL ...
+                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ausdrücke sind nur als erstes Element einer Pipeline zulässig.
("Expressions are only allowed as the first element of a pipeline.")
    + CategoryInfo          : ParserError: (:) [], ParentContainsErrorRecordException
    + FullyQualifiedErrorId : ExpressionsMustBeFirstInPipeline
Batch to call powershell command to read out all wifi networks including keys [closed]
You can call all the notebooks from a parent notebook, and call the parent notebook from ADF.

Use dbutils.notebook.run("Notebook_name", <timeout in seconds>, {"Notebook_param1":"value1","Notebook_param2":"value2",...}) in the parent to call a particular notebook. You can pass parameters to the notebook using the parameters argument shown above.

Sample:

Parent notebook:

print("Calling Nb1")
print(dbutils.notebook.run("Nb1", 60, {"Nb1_param":"Nb1_value"}))
print("Calling Nb2")
print(dbutils.notebook.run("Nb2", 60, {"Nb2_param":"Nb2_value"}))

Nb1 notebook:

a=dbutils.widgets.get("Nb1_param")
dbutils.notebook.exit("Nb1 exit with parameter "+a)

Nb2 notebook:

a=dbutils.widgets.get("Nb2_param")
dbutils.notebook.exit("Nb2 exit with parameter "+a)

The child notebooks can also return values to the parent if you want.

Call this parent notebook from ADF. If you want to pass values to the notebooks, pass the parameters to the parent notebook and forward them to the child notebooks.
I have 8 separate notebooks in Databricks, so I am currently running 8 different pipelines in ADF, each pipeline containing one notebook. Is there a way to run a single pipeline which runs all notebooks, or a way to combine all notebooks into one "master" notebook, so that it is easier to run a pipeline only for the master notebook, which in turn runs all the other notebooks?
Combine multiple notebooks to run single pipeline
If you check the documentation in the link (reference-test-config-yaml), you'll see it points you to the details page in the Azure portal. If you look there, you will notice it uses a GUID for the testId, not a string.
I have a YAML file for the test config as follows:

version: v0.1
testId: NAME
testPlan: sampleTest
description: 'Load Test'
engineInstances: 1
failureCriteria:
  - avg(response_time_ms) > 15000
  - percentage(error) > 10

and in my pipeline YAML file I have AzureLoadTest@1 as follows:

- task: AzureLoadTest@1
  inputs:
    azureSubscription: ${{ connection }}
    loadTestConfigFile: '$yaml path'
    resourceGroup: ${{ ResourceGroup }}
    loadTestResource: ${{ TestResource }}
  env:
    ...

For some reason the pipeline created the load test in one resource group, and as long as the testId keeps the same value as the one that was created, the pipeline passes. But if I want to change the testId or run the pipeline with a new resource group, it keeps failing with this error:

{ error: { code: 'TestNotFound', message: 'Test couldn\'t find with given identifier NAME', target: null, details: null } }
##[error]Error in getting the test.

Is there a solution for this? The Azure Load Testing docs say that if the testId does not exist, a new test should be created: https://learn.microsoft.com/en-us/azure/load-testing/reference-test-config-yaml
Azure Load testing cannot create a new test
I was just being stupid and misunderstanding the documentation. I just needed to set the catchInterruptions option to true (which is the default value).

catchError(buildResult: 'UNSTABLE', stageResult: 'UNSTABLE', catchInterruptions: true) {
    // Do some stuff here and continue the build if anything fails
}

OR (because true is the default value)

catchError(buildResult: 'UNSTABLE', stageResult: 'UNSTABLE') {
    // Do some stuff here and continue the build if anything fails
}
In my Jenkins scripted pipeline, I am executing one stage with some tasks and then sending Slack messages. I want to send these Slack messages whether the previous stage passes or fails, and to do this I am using a catchError(), which works exactly as I need. My problem arises when I want to abort the build.

If I abort the build, I want to exit the build entirely and not send the Slack messages, but my catchError() is catching this abort, continuing the pipeline and sending the messages.

stage('first stage with catchError') {
    catchError(buildResult: 'UNSTABLE', stageResult: 'UNSTABLE', catchInterruptions: false) {
        // Do some stuff here and continue the build if anything fails
    }
}
stage('second stage that sends slack messages') {
    // Send some Slack messages based on the data produced in the first stage and based on whether the tasks succeeded or failed
}

I thought I could use the catchInterruptions option of catchError() to abort the entire build (here is a link to the docs for this function), but that clearly hasn't resulted in the behaviour I want. What should I do instead?
Jenkins: Aborting a build with a catchError()
Use sklearn preprocessing independently of the TensorFlow model in your training script. Afterward, save both your sklearn preprocessing steps and the TensorFlow model as ONNX. Then, either feed the output of the preprocessing step as the input to the model in your .NET application, or use the ONNX helper to stitch both models together in advance.

P.S. If you need a concrete example of how to combine several ONNX models into one file, you could refer to this file in my Falcon-ML library, where I have exactly the same use case.
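To make the first step concrete, here is a rough sketch (my own illustration, not taken from the answer) of exporting a fitted sklearn preprocessor to ONNX with skl2onnx; preprocessor and n_features are assumed to come from your training script:

from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

# Assumes `preprocessor` is an already-fitted sklearn transformer/pipeline
# and `n_features` is the number of raw input columns.
onnx_preprocessor = convert_sklearn(
    preprocessor,
    initial_types=[("input", FloatTensorType([None, n_features]))],
)
with open("preprocessor.onnx", "wb") as f:
    f.write(onnx_preprocessor.SerializeToString())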
I have a simple sequential model written in Python using the TensorFlow library. As input I have categorical and numerical columns, and as output I get a float number.

I would like to deploy my model in a Windows (.NET) application, and I am wondering how to deal with the data encoders (e.g. label encoder, normalization encoder).

I seem to have at least two options:

save the encoders somehow - how?
add a preprocessing layer in TF (I am personally for this option), but how? I am looking for a solution analogous to FeatureUnion/ColumnTransformer from sklearn. Is it possible to use a preprocessing layer with the option of setting an encoder for each column separately? How?
Preprocessing data in TensorFlow
Here, I tried to give the request body as JSON directly while using a GET request, and it was accepted, as you can see in the image below. But when I tried to validate or publish, it gave me this error:

REST connector doesn't support GET requests with request body.

So, AFAIK, to pass a request body you need to use a POST request.

You can raise a feature request for this here.
I am trying to get data from a REST API. According to the documentation, the API needs to have a date in the body, like this:

{"date": "2023-01-05"}

It works when I try it in Postman. When I try it in Azure Data Factory, it doesn't work.

I have a Copy data activity whose source is a REST API linked service call. The necessary headers are inserted. I have tried to insert the body in the additional headers, but that did not work.

My question is: how/where can I insert the body in the source of a Copy data activity when the source is a REST API linked service?

Here is a picture of the source in the copy activity, and the error I am getting:

Error code: Rest call failed with client error, status code 422, please check your activity settings. Request URL: https://api.gastroplanner.eu/booking/v1/bookings. Response: {"message":"The given data was invalid.","errors":{"date":["The date field is required when year month is not present."]}}
Data factory: Inserting the body in a copy data activity source
I'm not sure if there is a built-in function or similar that you can call from within Bitbucket Pipelines that will decline the PR automatically. However, even if there isn't anything built in, there is still an API endpoint that allows you to do it.

You can check what the BITBUCKET_EXIT_CODE is and, depending on that, call the endpoint. You only need the PR_ID to pass as a parameter, which you already have.
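A rough sketch of how that could look in bitbucket-pipelines.yml (my illustration, assuming an app password stored in the secured variables BB_USER and BB_APP_PASSWORD):

pipelines:
  pull-requests:
    '**':
      - step:
          script:
            - ./run-build-and-tests.sh
          after-script:
            # BITBUCKET_EXIT_CODE is only available in after-script sections
            - |
              if [ "$BITBUCKET_EXIT_CODE" -ne 0 ]; then
                curl -s -X POST -u "$BB_USER:$BB_APP_PASSWORD" \
                  "https://api.bitbucket.org/2.0/repositories/$BITBUCKET_WORKSPACE/$BITBUCKET_REPO_SLUG/pullrequests/$BITBUCKET_PR_ID/decline"
              fi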
I am using Bitbucket Pipelines for a pipeline that runs when a PR is raised. Is there any way, in bitbucket-pipelines.yml or in the Bitbucket settings, to cancel/decline the PR if the pipeline fails at any step?

There are variables like BITBUCKET_PR_ID and BITBUCKET_EXIT_CODE, which contain the PR ID and the exit status code; can the process be automated using those?

Main objective: cancel the pull request if any step fails in Bitbucket Pipelines.
Decline a PR when the pipeline fails
If you have tried all the different Jira plugins out there and none of them work, you could consider calling the Jira API directly. The API is actually more stable than using a plugin, although it isn't as simple. If you are using Jira Server, you can read about how to do it here. If you are using Cloud, follow the documentation here.

You can use the HTTP Request plugin with Jenkins to send the POST requests to the Jira API.
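For illustration only (not from the original answer), a comment could be posted on an issue from a pipeline like this, assuming a Jenkins credential ID jira-credentials-id and an issueKey you have already extracted from the commit messages:

def payload = groovy.json.JsonOutput.toJson([
    body: "Jenkins build ${env.BUILD_URL} finished with status ${currentBuild.currentResult}"
])
// POST a comment via the Jira REST API using the HTTP Request plugin
httpRequest(
    httpMode: 'POST',
    url: "https://jira.example.com/rest/api/2/issue/${issueKey}/comment",
    contentType: 'APPLICATION_JSON',
    requestBody: payload,
    authentication: 'jira-credentials-id'
)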
I'm moving from Maven jobs to pipeline jobs. In the Maven jobs we used the "Jira: Update relevant issues" plugin in the post-build actions step, which updates the current Jenkins build status on all the Jira issues mentioned in the commit messages.

I'm trying to implement the same in a pipeline script. I tried something like the below, but it didn't help:

stage('Jira: Update relevant issues') {
    steps {
        script {
            try {
                step([$class: 'hudson.plugins.jira.JiraIssueUpdater',
                      issueSelector: [$class: 'hudson.plugins.jira.selector.DefaultIssueSelector'],
                      scm: [$class: 'GitSCM', branches: [[name: '<branch name>']],
                            userRemoteConfigs: [[url: '<clone url>']]]])
            } catch (Exception e) {
                echo "Error updating Jira issues: ${e}"
            }
        }
    }
}

I have installed multiple Jira plugins in my Jenkins environment and tried using their pipeline steps, but no joy.

Note: everywhere in the pipeline script examples people mention the Jira issue id/key explicitly, but here whatever commit triggered the Jenkins build will have the Jira ticket in its commit message. All those Jira tickets should be updated with the build status in their comment section.
How to add the Jenkins build status as a comment on Jira issues using a pipeline script
I had the same problem after implementing my own pipeline. Digging into the error message:

function in the 'src.pipelines.your_new_pipeline' module does not expose a 'create_pipeline' function, so no pipelines defined therein will be returned by 'find_pipelines'.

To fix this, add the following to src/pipelines/your_new_pipeline/__init__.py:

"""Expose the create_pipeline func!"""
from .pipeline import create_pipeline

This allows Kedro to effectively run from your_new_pipeline import create_pipeline.
I am getting:

ValueError: Pipeline contains no nodes after applying all provided filters

My pipelines folder looks like this:

src/pipelines/data_processing
src/pipelines/data_science
src/pipelines/data_project
src/pipelines/pipeline2

kedro build-docs doesn't work. I also tried:

pip install pip-tools
pip install -r src/requirements.txt
pip install "kedro-datasets[<group>]"

Nothing works. [kedro v0.18.7] [Python 3.10.8]

My project was working perfectly, then at some point I tried to implement modular pipelines and ran the command to create a new pipeline: kedro pipeline create pipeline2. The idea was to create a parallel pipeline to run alongside the main pipeline (data_processing). Before running the project I realized it was not necessary and ran the command to delete it: kedro pipeline delete pipeline2.

Since then my project has stopped working. I tried to create another pipeline with the name of my main pipeline, without success. I tried to create another empty pipeline and move my code into nodes.py and pipeline.py, also without success. I tried running kedro build-docs to refactor, also to no avail. I do not know what else to do.

kedro run returns "Pipeline contains no nodes after applying all provided filters". When I run kedro registry list, it returns [default].
Kedro Pipeline contains no nodes after applying all provided filters
The GitHub doc "Using workflow run logs" says the following about finding/downloading logs created by workflows:

You can download the log files from your workflow run. You can also download a workflow's artifacts. For more information, see "Storing workflow data as artifacts." Read access to the repository is required to perform these steps.

You can follow the rest of the steps there to find and download your desired files.
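Since the lint report lives on the runner's disk and disappears when the job ends, one common pattern (my suggestion, not part of the quoted doc) is to publish it as an artifact so it shows up on the run page; the path below is taken from the log line in the question:

- name: Upload lint report
  if: always()  # upload even when an earlier step fails
  uses: actions/upload-artifact@v3
  with:
    name: lint-results
    path: app/build/intermediates/lint_intermediate_text_report/debug/lint-results-debug.txt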
On my GitHub Action, the log contains the following:

The full lint text report is located at: /home/runner/work/honk/honk/app/build/intermediates/lint_intermediate_text_report/debug/lint-results-debug.txt

honk is the name of my repo, but where do I find the lint-results-debug.txt file?

This is how my android.yml file is configured:

name: Android CI

on: [ pull_request ]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up JDK 1.11
        uses: actions/setup-java@v1
        with:
          java-version: 1.11
      - name: Change wrapper permissions
        run: chmod +x ./gradlew
      - name: Build with Gradle
        run: ./gradlew build
      - name: Run tests
        run: ./gradlew test
      - name: Upload app to AppSweep with Gradle
        env:
          APPSWEEP_API_KEY: ${{ secrets.APPSWEEP_API_KEY }}
        run: ./gradlew uploadToAppSweepRelease
Where does GitHub Actions store the log files?
If the job security-check "needs" the job build, then security-check should not run if build does not run. So security-check should have the same "rules" as build, so that it also runs only on main or master.
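As a sketch of what that could look like (the branch condition and script are placeholders mirroring the described setup):

security-check:
  stage: test
  needs: ["build"]
  rules:
    # mirror the same branch restriction that the build job has
    - if: '$CI_COMMIT_BRANCH == "main" || $CI_COMMIT_BRANCH == "master"'
  script:
    - ./run-security-scan.sh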
I have built a pipeline where I only run the security-check job if the image build was successful. But the build only runs on the main or master branch. Therefore, when I push to a branch that is neither of those two, the pipeline fails with the following error:

Unable to create pipeline
'security-check' job needs 'build' job, but 'build' is not in any previous stage

I am using

needs: ["build"]
when: on_success

in order to create the dependency. Is there a way to skip the security-check instead of failing the whole pipeline?
How to skip a job instead of failing pipeline in Gitlab
The cloudFiles.allowOverwrites option may help you. Per the documentation:

Whether to allow input directory file changes to overwrite existing data. Available in Databricks Runtime 7.6 and above.

But then you will need to handle duplicates inside your data processing pipeline.
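A minimal Auto Loader sketch with the option enabled (the format, paths and table name are assumptions for illustration, not from the original question):

df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "json")
      # re-process files whose contents changed after they were first ingested
      .option("cloudFiles.allowOverwrites", "true")
      .load("/mnt/landing/incoming"))

(df.writeStream
   .option("checkpointLocation", "/mnt/checkpoints/incoming")
   .toTable("bronze_incoming"))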
I am currently setting up a data pipeline in Databricks. The situation is as follows: incoming data arrives as JSON files, which are fetched asynchronously into the file store. If data is received multiple times a day, it is appended to the same JSON files.

The pipeline is triggered once a day. As far as I understand it, if the pipeline is executed before all data for the day has been collected, the file is already marked as processed and will not be re-evaluated, even though new data arrived after the pipeline ran. This results in the Delta tables missing that data.

Is there any way to fix this behaviour?
Databricks processed files
Parsing a 1 GB XML file requires a significant amount of memory in your workers.

Looking at your pipeline JSON, your pipeline is currently configured to allocate 2 GB of RAM per worker:

"config": {
    "resources": {
        "memoryMB": 2048,
        "virtualCores": 1
    },
    "driverResources": {
        "memoryMB": 2048,
        "virtualCores": 1
    },
    ...
}

This is likely insufficient to hold the entire parsed ~1.1 GB JSON payload.

Try increasing the amount of executor memory in the Config -> Resources -> Executor section. I would suggest trying 8 GB of RAM for your example.

EDIT: When using the Default or Autoscaling compute profile, CDF creates workers with 2 vCPU cores and 8 GB of RAM. You will need to increase this using the following runtime arguments:

system.profile.properties.workerCPUs = 4
system.profile.properties.workerMemoryMB = 22528

This will increase the worker size to 4 vCPUs and 22 GB of RAM, which is large enough to fit the requested executor in the worker.
When transforming an XML file to JSON, my Data Fusion pipeline, configured in Autoscaling mode with up to 84 cores, stops with an error. Can anybody help me make it work?

The 100-page raw log file seems to indicate that the possible errors were:

+ExitOnOutOfMemoryError
Container exited with a non-zero exit code 3. Error file: prelaunch.err

It happened with the configuration shown in the attached screenshots (Data Fusion pipeline, raw log file, 1.4 GB XML file).

The weird thing is that the very same pipeline, with an XML file 10 times smaller (only 141 MB), worked correctly.

Can anybody help me understand why the Cloud Data Fusion pipeline, set to Autoscaling mode with up to 84 cores, succeeds with the 141 MB XML file and fails with the 1.4 GB XML file?

For clarity, all the detailed steps are in my GitHub repo.
Data Fusion for xml-to-json transformation: "+ExitOnOutOfMemoryError" and "exited with a non-zero exit code 3. Error file: prelaunch.err"
I was able to resolve this error by refactoring the pipeline to use Build Matrices (https://buildkite.com/docs/pipelines/build-matrix) and Group steps (https://buildkite.com/docs/pipelines/group-step) in the configuration.
I am working on a Buildkite CICD pipeline that occasionally has too many jobs to run and results in the following error:Failed to upload and process pipeline: Pipeline upload rejected: The number of jobs in this upload exceeds your organization limit of 500. Please break the upload into batches below this limit, or contact support to discuss an increase.Is there a way to batch upload the pipeline steps for a single pipeline in order to avoid this error?
How can I reduce the number of jobs in a Buildkite pipeline upload?
Finally, the problem is solved.

First, I checked the related fields on the pipeline. Then I wondered what would happen after ignoring the failures, so I saved the pipeline and checked the logs:

unable to acquire authentication token for tenant:xxx: refreshing spt token: : Refresh request failed. Status Code = '401'. Response body: {"error":"invalid_client","error_description":"AADSTS7000222: The provided client secret keys for app 'xxxx' are expired. Visit the Azure portal to create new keys for your app: https://aka.ms/NewClientSecret, or consider using certificate credentials for added security: https://aka.ms/certCreds.\r\nTrace

This means the secret key had expired. I created a new key and edited the integration, and the problem was solved.
I suddenly started getting the error:

field [o365audit] not present as part of path [o365audit.CreationTime]

Integration: Office 365 Logs, version 1.4.1. Agent version: 7.17.3.

I cannot get the logs; could you please help?

I also tried to change the ingest pipeline from o365audit.CreationTime to o365.audit.CreationTime because of the log schema, but this time it gave the error:

field [o365] not present as part of path [o365.audit.CreationTime]
field [o365audit] not present as part of path [o365audit.CreationTime]
Try this:

Get-Process | Where-Object { $_.ProcessName -match 'acc' } | Select-Object -Property id, Name | % {$_}

By doing a ForEach-Object (written as %) you enable processing for each object. Inside the block you could also use $_.Name or $_.Id.
It must be obvious, but I do not get it, being fairly new to PowerShell. Can anybody explain why I cannot 'pipe' the outcome of the following commands further?

Get-Process | Where-Object { $_.ProcessName -match 'acc' } | Select-Object -Property id, Name

This gives output like:

   Id Name
   -- ----
11744 MSACCESS

I would expect I could do the following:

Get-Process | Where-Object { $_.ProcessName -match 'acc' } | Select-Object -Property id, Name | write-host $_.name

or

Get-Process | Where-Object { $_.ProcessName -match 'acc' } | Select-Object -Property id, Name | $_

I suppose I am missing something here. Thanks to all.
Could you help me understand the piping outcome in PowerShell? [duplicate]
In the root directory of your GitLab Runner installation, often c:\gitlab-runner, you'll see a config.toml file. You need to change pwsh (PowerShell Core) to powershell (Windows PowerShell).

[[runners]]
  executor = "shell"
  shell = "pwsh"

to

[[runners]]
  executor = "shell"
  shell = "powershell"
I am trying to use a GitLab runner installed on my Windows machine, but the pipeline execution fails with the error:

ERROR: Job failed (system failure): prepare environment: failed to start process: exec: "pwsh": executable file not found in %PATH%. Check https://docs.gitlab.com/runner/shells/index.html#shell-profile-loading for more information

I am using "shell" as the executor while registering the runner, and I have set up the PATH variables correctly in the system variables.
Gitlab runner configuration on windows
Try the following. Check this issue for more details.

withEnv(['JENKINS_NODE_COOKIE=dontkill']) {
    sh "nohup java -jar /home/oomnpe/workspace/oomnpe_bot/target/oomnpe_bot-1.0-jar-with-dependencies.jar &"
}
The question is this: I need to run a jar file on the node. In the Jenkins pipeline I write:

stage('Start bot') {
    steps {
        sh 'nohup java -jar /home/oomnpe/workspace/oomnpe_bot/target/oomnpe_bot-1.0-jar-with-dependencies.jar'
    }
}

But the build goes on endlessly after launching the jar, showing the logs of the application if you make requests to it. If I stop the build, the application also stops.

How can I make the jar run on the remote machine and the build stop? Everywhere people write about "nohup", but I use it and there is no result.
How to run jar through jenkins as a separate process?
Based on the information I was able to find, and many hours of debugging, I ended up using a workaround in which I ran the golang commands in a CmdLine@2 task instead. Due to how GoTool@0 sets up the pipeline and environment, this is possible.

Thus, the code snippet below worked for my purposes.

steps:
  - task: GoTool@0
    inputs:
      version: '1.19.0'
  - task: CmdLine@2
    inputs:
      script: 'CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build'
      workingDirectory: '$(System.DefaultWorkingDirectory)'
I am currently migrating some build components to Azure Pipelines and am attempting to set some environment variables for all Golang-related processes. I wish to execute the following command within the pipeline:

CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build [...]

When utilizing the provided Golang integrations, it is easy to add arguments for Go-related processes, but setting an environment variable for all (or for each individual) Go process does not seem possible. Neither GoTool nor the default Go task seems to support it, and performing a script task with a shell execution does not seem to be supported either.

I have also tried adding an environment variable to the entire pipeline process that defines the desired flags, but these appear to be ignored by the Go task provided by Azure Pipelines itself.

Would there be a way to add these flags to each (or a single) Go process, such as in the following code block (in which the flags input line is made up by me)?

- task: Go@0
  inputs:
    flags: 'CGO_ENABLED=0 GOOS=linux GOARCH=amd64'
    command: 'build'
    arguments: '[...]'
    workingDirectory: '$(System.DefaultWorkingDirectory)'
  displayName: 'Build the application'
Injection of Golang environment variables into Azure Pipeline
OK, found it. I was going in the wrong direction: CloudFormation templates have a dedicated Outputs section that is made exactly for exposing the desired information about the stack.

As described in this SO question, those values can be retrieved with the describe-stacks AWS CLI command, e.g.:

aws cloudformation describe-stacks --stack-name my-stack-01 --query <query>

So there's no need to have values passed directly between pipelines; I can just retrieve them in a dedicated job in the child pipeline.
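To illustrate (my own example, with a made-up output name), the template would declare the bucket as an output and the SPA project's CI job would query it like this:

# In the CloudFormation template
Outputs:
  SpaBucketName:
    Description: Bucket that hosts the SPA static assets
    Value: !Ref SpaBucket

# In the SPA project's CI job
aws cloudformation describe-stacks --stack-name my-stack-01 \
  --query "Stacks[0].Outputs[?OutputKey=='SpaBucketName'].OutputValue" \
  --output text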
The use case: two projects hosted on GitLab, a simple single-page application and an AWS stack for a serverless backend.

Having two distinct projects makes permission handling more secure and flexible, since we can have two distinct teams (frontend and devops), each working independently on its own part of the whole project.

This brings the need for the AWS project pipeline to pass down the references to the AWS resources (e.g. the S3 buckets where the SPA static assets for staging and development are deployed, or the gateway stage URLs for API calls) that are needed to compile the SPA static assets.

The problem: I know from the documentation that a parent pipeline can pass variables to child pipelines when executed, but in my use case the SPA project pipeline is expected to be triggered more often by direct commits than by the parent pipeline, so I'm wondering how to provide the S3 reference in those cases.

Does this make sense, or am I missing something and going in the wrong direction?
Gitlab CI/CD - re-use old variable in child pipeline without being triggered by parent pipeline
As far as I know (I am still new to our ADO solutions), we have to fully run/schedule the pipeline to see that it runs, and then wait a day or so for the scheduler to complete the automatic executions. At this point I do have some pipelines that have been failing for a couple of days and that I need to fix.

I do get emails when a pipeline fails, configured like this in the JSON that holds the metadata to create a job:

"settings": {
    "name": "pipelineName",
    "email_notifications": {
        "on_failure": [
            "[email protected]"
        ],
        "no_alert_for_skipped_runs": true
    },

There is an equivalent extension that can be added via the link below, but I have not done it this way and cannot verify whether it works.

Azure Pipelines: Notification on Job failure
https://marketplace.visualstudio.com/items?itemName=rvo.SendEmailTask

I am not sure what actions your pipeline performs, but if jobs are scheduled on external compute like Databricks, there should be an email alert system you can use to detect failures.

Other than that, if you have multiple environments (dev, qa, prod), you could test in a non-production environment. Or, if you have a dedicated storage location that is only for testing a pipeline, use that for the first few days and then reschedule the pipeline against the real location after it completes a few runs.
When changing the pipelines for my company I often see a pipeline breaking under some specific condition that we did not anticipate. We use YAML files to describe the pipelines (Azure DevOps).

We have multiple scenarios, such as:

Pipelines are run by automatic triggers, by other pipelines and manually
Pipelines share the same templates
There are IF conditions for some jobs/steps based on parameters (user input)

In the end, I keep thinking of testing all scenarios before merging changes; we could create scripts to do that. But it's unfeasible to actually RUN all scenarios because it would take forever, so I wonder how to test it without running it. Is it possible? Do you have any ideas?

Thanks!

I already tried the Preview endpoints from the Azure REST API, which are good, but they only validate the input, such as variables and parameters. We also need to make sure which steps are running and which variables are being set in those.
How to test my pipeline before changing it?
From the error message, I assume you have set your variable at run time, but this value needs to be known at compile time. Runtime variables aren't supported for the service connection or the Azure subscription. You could refer to this ticket, "DevOps Pipeline AzureCLI@2 with dynamic azureSubscription", and https://github.com/microsoft/azure-pipelines-tasks/issues/10376 for more details.

As a workaround, you could use parameters in your pipeline:

parameters:
  - name: azuresubscription
    type: string
    values:
      - xxxx

jobs:
  - job: A
    steps:
      - task: AzureCLI@2
        inputs:
          azureSubscription: '${{ parameters.azuresubscription }}'
          inlineScript: |
            az keyvault secret show --vault-name xxxx
          scriptLocation: 'inlineScript'
          scriptType: 'bash'
          failOnStandardError: true
My team has an AzureCLI@2 task:

- task: AzureCLI@2
  displayName: 'KEY VAULT - Get Secrets'
  inputs:
    azureSubscription: '${{ variables.azuresubscription }}'
    inlineScript: |
      secrets=$(az keyvault secret list --vault-name $(postDeploy.kvName) --query "[].name" -o tsv)
      for secret in $secrets; do
        pwd=$(az keyvault secret show --name $secret --vault-name $(postDeploy.kvName) -o tsv --query value)
        echo "##vso[task.setvariable variable=${secret};issecret=true]${pwd}"
      done
    scriptLocation: 'inlineScript'
    scriptType: 'bash'
    failOnStandardError: true

but '${{ variables.KeyVaultName }}' is not working; I occasionally get an exception. I also tried other ways to inject the subscription, for example via env, but they failed too.
Problem injecting the subscription in AzureCLI
For the latter, you are missing the import, as stated in the project's README: https://github.com/qualersoft/jmeter-gradle-plugin#user-content-running-a-jmeter-test

Import the task package:

import de.qualersoft.jmeter.gradleplugin.task.*

plugins {
  id "de.qualersoft.jmeter" version "2.1.0"
}
I'm trying to run some JMeter tests in my Jenkinsfile pipeline, but I'm getting some errors:

A problem was found with the configuration of task ':jmReport' (type 'TaskJMReports').
  - In plugin 'net.foragerr.jmeter' type 'net.foragerr.jmeter.gradle.plugins.TaskJMReports' property 'reportDir' is missing an input or output annotation.

This is how I'm trying to run it.

build.gradle

plugins {
  id "net.foragerr.jmeter" version "1.0.5-2.13"
}

apply plugin: 'net.foragerr.jmeter'

jmeter {
  jmTestFiles = [file("src/test/jmeter/TestPlan.jmx")]
  enableExtendedReports = true //produce Graphical and CSV reports
}

Pipeline

stage('Run Non-Functional tests - Windows'){
  when { expression { env.OS == 'BAT' }}
  steps {
    dir('') {
      bat 'gradlew.bat jmReport'
    }
  }
}

I also tried it this other way.

build.gradle

plugins {
  id "de.qualersoft.jmeter" version "2.1.0"
}

tasks.register('jmRun', JMeterRunTask) {
  jmxFile.set("TestPlan.jmx")
}

tasks.register("jmReport", JMeterReportTask) {
  jmxFile.set("TestPlan.jmx")
  dependsOn("jmRun")
  deleteResults=true
}

The stage is the same, and I'm getting this error:

> Could not get unknown property 'JMeterRunTask' for root project 'flowcrmtutorial' of type org.gradle.api.Project.

Why am I getting these errors?
Running jmeter tests in jenkinsfile
I don't think "when" can do that. If you don't mind a bit of scripting, you can do it like this:

pipeline {
    agent none
    stages {
        stage('build') {
            agent {
                label env.GIT_BRANCH == 'master' ? 'Solaris' : 'Linux'
            }
            steps {
                sh 'hostname'
            }
        }
    }
}

This assumes you are storing the pipeline in git. I'm sure Bitbucket has a similar env variable.
Our project is written in Java and deployed to a Solaris environment in production, but most of our test and dev machines are Linux, and so are most of our Jenkins agents.

I'm looking for a way to run the Jenkins build on a Solaris agent only when the branch is master/release_branch, and choose a Linux agent when the branch is something else. The idea is to ensure we haven't introduced any compatibility issues on Solaris.

I'm looking for a declarative pipeline approach, something like the following, but which also selects a Linux agent when the condition is not met:

stage('build') {
    steps {
        mvn clean
    }
    when {
        branch comparator: 'EQUALS', pattern: 'master'
        beforeAgent true
    }
    agent {
        label 'Solaris'
    }
}
How to select a jenkins agent for a build based on github branch?
Make sure the drive name Z is not already taken on the deployment group VM.

When you say "in local I am able to run the script successfully", do you mean you ran it on the deployment group VM? If not, try running the script there to check whether it succeeds, and check the network and port.

You can check the code sample that is known to work, and double check that the password is set correctly if you are using a secret variable in the pipeline.

If the issue persists, please share more info such as a screenshot and the log.
I'm using an Azure DevOps release pipeline and planning to map an Azure file share on all VMs in the deployment groups. I have the script from the portal for mapping the file share locally; I replace the passkey value in that script with a variable that I pass in. I checked and the value is being received, and the result shows the share mapped with the drive letter. But when I open File Explorer, the drive shows as disconnected, I am unable to eject it, the share path shows "This network connection does not exist", and it is not actually mapped. If there were an issue getting the key from the variable, it should throw an error rather than report that the drive was created.

Looking for help: is there a mistake in the pipeline, or in the script?

Note: locally I can run the script successfully and the drive is mapped successfully as well.

Script:

cmd.exe /C "cmdkey /add:"storageaccount.file.core.windows.net" /user:"localhost\storageaccount" /pass:"accesskeyforstorageaccount""
New-PSDrive -Name Z -PSProvider Filesystem -Root "\\storageaccount.file.core.windows.net\fileshare-name" -Persist
Unable to map azure file shares using azure pipelines to deployment group vm's
On success, exec-family functions such as execlp() replace the currently running program image with the requested one. They return only on failure to do that.

Your function forks once and performs an execlp() in both the parent and the child, thus, yes, if everything succeeds then the original program does not process any more input itself. It no longer exists.

To avoid that, you need to fork a child for each command that requires an exec. Also, for a pipeline with n > 2 commands, you will need n - 1 pipes for their communication.

Furthermore, you need to correctly handle the error case of execlp() returning (at all). At minimum, a child process in which that happens should terminate quickly, probably by calling _exit(1). I would advise also emitting a diagnostic message, however, probably by use of perror().
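A minimal sketch of that structure for a fixed two-command pipeline (illustrative only; your real code would build it from the parsed vector and add error checks on fork): the shell forks one child per command, each child execs, and the parent keeps its own stdin/stdout and simply waits.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run "cmd1 | cmd2" without replacing the shell process itself. */
static void run_pipeline(char *cmd1, char *cmd2)
{
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return; }

    pid_t p1 = fork();
    if (p1 == 0) {                      /* first child: writes into the pipe */
        dup2(fd[1], STDOUT_FILENO);
        close(fd[0]); close(fd[1]);
        execlp(cmd1, cmd1, (char *)NULL);
        perror("execlp"); _exit(1);     /* only reached if exec fails */
    }

    pid_t p2 = fork();
    if (p2 == 0) {                      /* second child: reads from the pipe */
        dup2(fd[0], STDIN_FILENO);
        close(fd[0]); close(fd[1]);
        execlp(cmd2, cmd2, (char *)NULL);
        perror("execlp"); _exit(1);
    }

    /* Parent (the shell): close both ends and wait, then keep reading input. */
    close(fd[0]); close(fd[1]);
    waitpid(p1, NULL, 0);
    waitpid(p2, NULL, 0);
}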
I am implementing a pipeline in my shell, e.g. ls|sort. The shell has many commands like DIR, COPY, | (which means pipeline), and so on...

When I run this command (ls|sort) it works. The problem is: the shell exits and doesn't wait for the next commands after the pipeline function runs. I think it is something to do with STDIN/STDOUT, when I close them or something.

Note that the pipeline only takes 2 commands. Here is my code for the pipeline part:

void Pipeline(char *input)
{
    char delim[] = "|";
    char *token;
    char *vec[1024] = {0};
    int k = 0;
    for (token = strtok(input, delim); token; token = strtok(NULL, delim))
    {
        vec[k++] = token;
    }
    vec[k] = NULL;

    int fd[2];
    pid_t pid;
    if (pipe(fd) == -1) {
        perror("pipe");
        exit(EXIT_FAILURE);
    }
    pid = fork();
    if (pid == -1) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {
        close(fd[0]);               // close read from pipe, in parent
        dup2(fd[1], STDOUT_FILENO); // Replace stdout with the write end of the pipe
        close(fd[1]);               // Don't need another copy of the pipe write end hanging about
        execlp(vec[0], vec[0], NULL);
    }
    else {
        close(fd[1]);               // close write to pipe, in child
        dup2(fd[0], STDIN_FILENO);  // Replace stdin with the read end of the pipe
        close(fd[0]);               // Don't need another copy of the pipe read end hanging about
        execlp(vec[1], vec[0], NULL);
    }
}
Pipeline in a C shell: the shell closes when a piped command runs
Nature of the problem

This happens by default because your experiment tries to upload the source code into its default bucket (which here is the JumpStart bucket).

You can check the default bucket assigned to pipeline_session by printing pipeline_session.default_bucket().

Evidently you do not have the correct write permissions on that bucket, and I encourage you to check them.

When you comment out the entry_point, it doesn't give you that error precisely because it doesn't have anything to upload. However, the moment it tries to do inference, it clearly does not find the script.

One possible quick and controllable solution

If you want to apply a cheat to verify what I told you, try putting the code_location parameter in the Model. This way you can control exactly where your pipeline step writes. You will need to specify the S3 URI of the desired destination folder.

code_location (str) – Name of the S3 bucket where custom code is uploaded (default: None). If not specified, the default bucket created by sagemaker.session.Session is used.
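In practice that could look like the following (my sketch on top of the question's snippet; the bucket/prefix is a placeholder you must have write access to):

model = Model(
    image_uri=infer_image_uri,
    model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
    sagemaker_session=pipeline_session,
    role=role,
    source_dir=infer_source_uri,
    entry_point="inference.py",
    # Upload the repacked inference code somewhere you own instead of the default bucket
    code_location="s3://my-own-ml-bucket/lightgbm-inference-code",
)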
I encountered this error when running pipeline.upsert():

S3UploadFailedError: Failed to upload /tmp/tmpexrxqr32/new.tar.gz to jumpstart-cache-prod-ap-southeast-1/source-directory-tarballs/lightgbm/inference/classification/v1.1.2/sourcedir.tar.gz: An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied

My pipeline consists of preprocessing, training, evaluating, creating the model and transforming steps. When I ran these steps separately they worked just fine, but when I put them together in a pipeline, the mentioned error occurred. Can anyone tell me the cause of this error? I did not write any line of code to upload anything to the JumpStart S3 bucket.

model = Model(
    image_uri=infer_image_uri,
    model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
    sagemaker_session=pipeline_session,
    role=role,
    source_dir=infer_source_uri,
    entry_point="inference.py"
)

When I comment out the entry_point line, pipeline.upsert() returns no error, but the transform job fails. The model I'm using is JumpStart LightGBM.
Sagemaker Pipeline Error: Failed to upload to jumpstart-cache-prod-ap-southeast-1/source-directory-tarballs/lightgbm/inference/classification
The issue was two-fold. Firstly, the service principal was not configured with admin rights, so it could not create the sub-folder in /Repos. Once this was fixed, I got a different error when issuing the POST command to create the repo (in the newly created sub-folder). The error I got was:

{"error_code":"PERMISSION_DENIED","message":"Missing Git provider credentials. Go to User Settings > Git Integration to add your personal access token."}

The solution to this permissions issue has already been answered here.
I am developing an Azure pipeline and want to create a repo in Databricks (https://docs.databricks.com/dev-tools/api/latest/repos.html#operation/create-repo) and save it to /Repos/sub_folder/repo_name.

To test the commands in the pipeline, I am using the Databricks CLI and the Repos API (as described in the link above) locally from my PC. This all works fine, and all the repo files are saved into the subfolder under the root /Repos folder.

When I try this in the pipeline, running as the service principal, the pipeline fails when trying to create a subfolder under /Repos. In the pipeline, I am issuing the following Databricks CLI command:

databricks workspace mkdirs /Repos/sub_folder

The error shown is:

Error: Authorization failed. Your token may be expired or lack the valid scope

Is there some further configuration, or permission, required to allow the service principal to create a folder/save files under /Repos?

PS. I would use the /Shared workspace instead of /Repos, but saving to /Shared does not seem to work with the "Files in Repos" feature (https://learn.microsoft.com/en-us/azure/databricks/repos/work-with-notebooks-other-files), which I need in order to access non-notebook files and run my actual model.

Any suggestions much appreciated...
Azure Databricks workspace CLI - cannot create new folder in /Repos folder as service principal
I also need to know the downstream pipeline status. There is a feature request for this inside GitLab; unfortunately, three years have gone by and they still haven't moved forward with it. We can vote on it to bump it up:

https://gitlab.com/gitlab-org/gitlab/-/issues/31566
Currently I would like to trigger a pipeline that is part of another project. One way could be to use multi-project pipelines via the trigger keyword. The problem is that if I do it this way, I have to be an owner or maintainer in the downstream repo. This is not possible because the downstream repo has SSH keys that must stay hidden.

So the other way I followed is this:

Made the desired branch protected in the 'Downstream' repository.
Created a pipeline trigger in the 'Downstream' project: Settings -> CI/CD -> Pipeline triggers.
As an admin, stored this token as a group CI/CD variable (or in the project which will trigger 'Downstream'): Group Settings -> CI/CD -> Variables.
Used it like below:

curl -X POST \
  --form token=$YOU_TRIGGER_TOKEN_VARIABLE \
  --form ref=target_branch \
  "$CI_API_V4_URL/projects/55/trigger/pipeline"

Is there any way to track the status of the triggered pipeline? Currently, there is no UI showing the triggered pipeline, so how should I know whether it was successful or not? There is also a strategy attribute (strategy: depend) when triggering a child pipeline using trigger:project, but it's not supported when using the trigger API.
How can I track the status of the downstream pipeline when I use the trigger API?
Check if this is still the case as of June 2023: Roblox/setup-foreman PR 39 updated the Node version in April 2023 with their v2.1 release.

Their action.yml now includes the correct version:

name: 'Setup Foreman'
description: 'Install Foreman, a toolchain manager for Roblox projects'
author: 'The Rojo Developers'
inputs:
  version:
    required: false
    description: 'SemVer version of Foreman to install'
  working-directory:
    required: false
    description: 'Working directory to run `foreman install` in'
  token:
    required: true
    description: 'GitHub token from secrets.GITHUB_TOKEN'
runs:
  using: 'node16'
  main: 'dist/index.js'
I'm currently setting up a simple CI pipeline for a project on GitHub, but I've run into a problem. When I run a Selene lint check, my code does get successfully integrated, but I get this warning:

Node.js 12 actions are deprecated. For more information see: https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/. Please update the following actions to use Node.js 16: Roblox/setup-foreman@v1

Here is my ci.yaml:

name: CI Pipeline

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  lint:
    name: Selene Lint
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - uses: Roblox/setup-foreman@v1
        name: Install Foreman
        with:
          version: "^1.0.0"
          token: ${{ SECRETS.GITHUB_TOKEN }}
      - name: Selene Lint
        run: |
          selene ./src

Does anyone know how I can set up Foreman using Node.js 16? Unfortunately, this is my first time setting up a CI pipeline, so I am not too familiar with how to approach this issue.
Pipeline actions deprecated (Node.js 12)
Use the following grok pattern:

(?<containerName>[a-zA-Z0-9._-]+).*?(?<timestamp>%{YEAR}\/%{MONTHNUM}\/%{MONTHDAY} %{TIME}) %{GREEDYDATA:message}

It works.
I am trying to send log messages from all containers to Elasticsearch, but I suspect a lot of them are not in JSON format. I am trying to parse them with a simple grok filter, but I see a lot of container names in the final message and a grok parse failure status.

if [type] == "filebeat-docker-logs" {
  grok {
    match => {
      "message" => "\[%{WORD:containerName}\] %{GREEDYDATA:message_remainder}"
    }
  }
}
How to parse non-json messages in logstash with grok filter
The default Node.js image you get from Docker Hub is a Debian-based distribution. Therefore, apt or apt-get must be used as the package management tool.

You must use an Alpine-based Node image if you want to use apk (Alpine Package Keeper).

Example: image: node:lts-alpine

The Node.js Docker image is built for a variety of versions and underlying Linux distributions. Please follow the link to view them, then select what you need based on your requirements.
I'm trying to build a pipeline which SSHes into my server and executes a bash script. But when I execute the pipeline, it states:

bash: line 129: apk: command not found

Why, though? I'm already specifying to use Alpine over my default Node image. My runner is configured as a shell executor.

image: node:16

build:
  only:
    - main
  script:
    - docker build --file Dockerfile --tag $IMAGE_NAME .

push_to_dockerhub:
  only:
    - main
  script:
    - docker login --username $DOCKERHUB_USERNAME --password $DOCKERHUB_PASSWORD
    - docker push $IMAGE_NAME

deploy:
  image: alpine:3.14
  only:
    - main
  before_script:
    - apk add openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
  script:
    - ssh myusername@$IP_ADDRESS -p 3000 "/home/myusername/deploy.sh && exit"

I need to be able to use apk in this pipeline to add the openssh client and therefore establish an SSH connection.
Why can't I use the apk package manager in my GitLab pipeline?
I think your pipeline is already well-optimized. IMHO, trying to parallelize it further will not yield better performance; rather, it will add more complexity to the pipeline.
I need to create a parallel pipeline that has the following steps:

Execute the integration tests;
Generate the integration tests HTML report;
Publish the integration tests HTML report on Jenkins;
Generate the integration tests HTML coverage report;
Publish the integration tests HTML coverage report on Jenkins;
The same steps as for the integration tests should also be done for the mutation tests;
Deploy the application (jar file) to a pre-configured staging server (Tomcat server instance);
Perform an automatic smoke test, which consists of a curl to check that the base URL of the application is responsive after deployment;
A UI acceptance manual test will be performed in the following way: a user is notified of the successful execution of all the previous tests and asked to perform a manual test. In order to cancel the progression or proceed, the UI acceptance manual test must take place, so the pipeline should wait for a manual confirmation on Jenkins;
A tag shall be pushed to my SCM (Source Control Management) repository with the Jenkins build number and status.

For now I only have an initial design of what I want my pipeline to be like. I decided to generate and publish the Javadoc in parallel with the mutation and integration tests, since these tests don't need the Javadoc to be done. I think I can parallelize my pipeline more; what do you think, and what's your opinion on my design?
How can I improve my pipeline efficiency?
It depends which Action you used.

It is recommended to start with "Deploying Node.js to Azure App Service" and its prerequisites (creating an Azure App Service plan, creating a web app, configuring an Azure publish profile and creating an AZURE_WEBAPP_PUBLISH_PROFILE secret).

You can then create your workflow, following the example "deployments/azure-webapps-node.yml" as a possible template for you to adapt.
I am trying to deploy my backend code to Azure App Service using GitHub Actions. The code has been committed, and the workflow yml file has been generated in the Action, but after clicking build it fails with the error:

Error: Process completed with exit code 1.
Build fails while deploying from GitHub to azure app service (node JS Twilio backend)
Thanks for the response @m-eriksen and @Perry. Yes, the reason for this error was that I was running bash commands in PowerShell. So I replaced the bash commands with PowerShell commands to achieve my task, as follows:

- '$source = "release"'
- '$destination = "ftp://username:password@hostname/destination"'
- '$webclient = New-Object -TypeName System.Net.WebClient'
- '$files = Get-ChildItem $source'
- 'foreach ($file in $files){Write-Host "Uploading $file" $webclient.UploadFile("$destination/$file", $file.FullName)}'
- '$webclient.Dispose()'

I know this is not the ideal solution, but as I am just playing with GitLab, this will really help me get started with it.
I have set up a GitLab pipeline for my repository and I am getting an error on deployment. Below is my deploy script.

script:
  # cd to where the csproj is
  - cd $deploy_path
  # publish the files - this will generate the publish files in bin/release
  - dotnet publish -c release
  # install zip and lftp
  - apt-get update -qq && apt-get install -y -qq zip lftp
  # cd to bin
  - cd bin
  # zip release, name the zip CreativelyCode.zip
  - mkdir prep
  - zip -r CreativelyCode release
  # upload file to ftp
  - lftp -e "set ssl:verify-certificate no; lpwd; open $FTP_HOST; user $FTP_USERNAME $FTP_PASSWORD; put -O /files/ CreativelyCode.zip; bye"

Below is the error screenshot. If I split it into separate apt-get commands, I get the error below. I have used this as a reference for my script. I am trying to deploy the files to an FTP server. Any help is greatly appreciated. Thanks.
GitLab - Error while deploying .NET Core application: apt-get package error
You can start with an echo $VM_IPADDRESS to check that the IP variable is properly instantiated.

"failed to run"

Then it depends on the error message (or whether the commands simply froze). Before the keyscan, you can test whether the network route is reachable from your GitLab CI runner with:

curl -v telnet://$VM_IPADDRESS:22

If it does not connect immediately, that would explain why the ssh-keyscan fails.
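For instance, a few debug lines in the before_script of that job make both checks explicit (curl may need to be installed first on an Alpine-based image; this is only a sketch):

before_script:
  # confirm the variable is actually set for this job
  - 'echo "VM_IPADDRESS is [$VM_IPADDRESS]"'
  # check that port 22 on the VM is reachable from the runner before key-scanning
  - apk add --no-cache curl
  - curl -v "telnet://$VM_IPADDRESS:22" --max-time 10 || echo "port 22 not reachable from the runner"
  - ssh-keyscan $VM_IPADDRESS >> ~/.ssh/known_hosts

If the variable echoes empty, check that it is defined for the project (and not protected while the branch is unprotected).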
The following code was from my deploy stage in my .gitlab-ci.yml file.

deploy_website:
  stage: deploy
  artifacts:
    paths:
      - public
  before_script:
    - "command -v ssh-agent >/dev/null || ( apk add --update openssh )"
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - pwd && ls
    - ssh-keyscan $VM_IPADDRESS >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    # - apk add bash
    # - ls deploy
    # - bash ./deploy/deploy.sh
    - ssh $SSH_USER@$VM_IPADDRESS "hostname && echo 'Welcome!!!' > welcome.txt"

This line "ssh-keyscan $VM_IPADDRESS >> ~/.ssh/known_hosts" failed to run when I execute my pipeline.
Unable to create ~/.ssh file using .gitlab-ci.yml
After researching and trying many ways to fix this, my solution is the following (for anyone who hits the same problem):

assembleDebug:
  stage: build
  script:
    - JAVA_OPTS="-Xmx1536m -XX:MaxPermSize=2048m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8"
    - ./gradlew --status
    - ./gradlew --stop
    - ./gradlew assembleDebug --no-daemon -Dkotlin.compiler.execution.strategy="in-process"
  artifacts:
    paths:
      - app/build/outputs/apk/dev/debug
I am trying to create a pipeline that builds my Android app with Android 13 (API level 33) on my GitLab, but I ran into this issue:

aapt2 W 09-29 11:28:13 896 896 LoadedArsc.cpp:682] Unknown chunk type '200'.

Error log:

----- End of the daemon log -----
FAILURE: Build failed with an exception.
* What went wrong:
Gradle build daemon disappeared unexpectedly (it may have been killed or may have crashed)
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
* Get more help at https://help.gradle.org
> Task :app:compileDevReleaseKotlin
ERROR: Job failed: exit code 1

Here is my pipeline config file:

image: openjdk:11-jdk

variables:
  ANDROID_COMPILE_SDK: "33"
  ANDROID_BUILD_TOOLS: "33.0.0"
  ANDROID_SDK_TOOLS: "8512546"

cache:
  paths:
    - .gradle/

before_script:
  - apt-get --quiet update --yes
  - apt-get --quiet install --yes wget tar unzip lib32stdc++6 lib32z1
  - export ANDROID_HOME="${PWD}/android-home"
  - install -d $ANDROID_HOME
  ...

assembleDebug:
  interruptible: true
  stage: build
  script:
    - ./gradlew assembleDebug
  artifacts:
    paths:
      - app/build/outputs/apk/dev/debug

Thank you for your help.
[Android][CI/CD][pipelines] aapt2 W 09-29 11:28:13 896 896 LoadedArsc.cpp:682] Unknown chunk type '200'
Make sure your private key (whose public key is added to your Gerrit server) is properly registered through the Jenkins SSH Credentials plugin.

Then try a simple job executed on that agent in order to test ssh -Tv gerrit@xxxx with an SSH step, using the credentials registered above.

That way, you can validate that the agent can contact and successfully authenticate against the Gerrit server.
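For example, a minimal test stage along these lines, assuming the SSH Agent plugin is available and the key is stored under a hypothetical credential ID gerrit-ssh-key (the agent label is also a placeholder):

pipeline {
    agent { label 'docker-agent' }               // placeholder label for your container agent
    stages {
        stage('Check Gerrit SSH access') {
            steps {
                // sshagent comes from the SSH Agent plugin; the credential ID is an assumption
                sshagent(credentials: ['gerrit-ssh-key']) {
                    // Gerrit refuses interactive shells, so the command may exit non-zero
                    // even when authentication works; we only care about the verbose output
                    sh 'ssh -Tv gerrit@xxxx || true'
                }
            }
        }
    }
}

If that stage shows a successful authentication but git pull still fails, the job is most likely not running under the same SSH agent/credentials as the checkout step.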
I build a job in a Docker container; my pipeline looks like this. I can check out the code from Gerrit with SSH-key credentials, but when I execute git pull an error occurs. The error message is:

gerrit@xxxx: Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights and the repository exists.

Why does this fail? I hope I can get some solutions, thanks.
Jenkins git pull fails with correct SSH keys set up
productId in the wishList is a String, while the _id field in product is an ObjectId.

You can change the type of the productId in the wishList, or you can use $lookup with let and pipeline to project the _id in product as a String value before comparing it to wishList.productId, like this:

{ "$lookup": {
    "from": "products",
    "let": { "productId": "$wishList.productId" },
    "pipeline": [
      { "$addFields": { "idAsString": { "$toString": "$_id" } } },
      { "$match": { "$expr": { "$eq": [ "$idAsString", "$$productId" ] } } }
    ]
} }

I haven't tested the query, since you didn't provide an easy way to reproduce the error, as mentioned by toyota Supra in the comment to your question, but it should be something along these lines.
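Alternatively, and again untested, you can convert the string on the wishList side with $toObjectId (available since MongoDB 4.0) and compare against _id directly. Note that $lookup also needs an as field; productDetails below is just a placeholder name, and $toObjectId will error out if any productId is not a valid ObjectId hex string:

{ "$lookup": {
    "from": "products",
    "let": { "productId": { "$toObjectId": "$wishList.productId" } },
    "pipeline": [
      { "$match": { "$expr": { "$eq": [ "$_id", "$$productId" ] } } }
    ],
    "as": "productDetails"
} }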
Wishlist / product / My code (see the linked snippets). I am getting an empty wishlist. Please help me understand why the wishlist array shows no data.
Lookup in MongoDB not working when getting objects from an array of objects
Conceptually, you don't need to have your labels/targets in the pipeline. Yes, you may need to apply LabelEncoder to y_train, but then imagine the situation where, after training, you want to do prediction. Also, the sklearn pipeline is quite often used for hyperparameter tuning, which likewise does not apply to targets. This approach should be suitable for most cases:

X = df.drop('income', axis=1)
y = df[['income']]

le = LabelEncoder()
# LabelEncoder expects a 1-D array, so flatten the single-column frame first
y = le.fit_transform(y.values.ravel())
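Then, when you do predictions later, the same encoder maps the model output back to the original labels, for example (model and X_test are assumed to exist):

# the model predicts encoded classes; inverse_transform restores ">=50k" / "<50k"
y_pred = model.predict(X_test)
y_pred_labels = le.inverse_transform(y_pred)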
Generally, pipelines are used like this: pipe.fit(X_train, y_train). All transformer methods are fitted and applied on X_train; y_train is only used for fitting the model. How can I construct a pipeline that transforms y_train? I have a y that contains the values ">=50k" and "<50k", and I want to use LabelEncoder as the transformer.

X = df.drop('income', axis=1)
y = df[['income']]

y_preprocessing = Pipeline([
    ("labelencoder", LabelEncoder())
])

preprocessing = ColumnTransformer([
    ("y_preprocessing", y_preprocessing, ['income'])
])

When using y_preprocessing.fit(y), it gives a TypeError:

TypeError: fit() takes 2 positional arguments but 3 were given

When using preprocessing.fit(y), it also gives a TypeError:

TypeError: fit_transform() takes 2 positional arguments but 3 were given
How to create a Pipeline for preprocessing Y_train?
You are using the cmd batch shell. As described in the documentation, you must use %VARNAME% instead of $VARNAME in your script: sections.

script:
  - echo %CI_COMMIT_BRANCH%
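Just for completeness: this only applies to the cmd shell. If you ever reconfigure the runner to use PowerShell instead, the equivalent syntax would be the following (not needed for your current setup):

script:
  - echo $env:CI_COMMIT_BRANCH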
I'd like to access one of the GitLab CI/CD pre-defined variables, but when I try to use it, I just get the literal string of my attempt to access it. Per GitLab's documentation, the correct syntax is "$CI_COMMIT_BRANCH", but this isn't working. I also tried creating my own variables, but the same thing happens. Here are some screenshots of my .gitlab-ci.yml file and the results of it running:
Unable to access any variables in a GitLab CI/CD pipeline
You can do something like the below. I don't have Windows to test on, but this should work on Windows as well; just provide the correct path.

pipeline {
    agent any
    parameters {
        choice(name: 'CHOICES', choices: listFiles("/var/jenkins_home/test"), description: '')
    }
    stages {
        stage('Test') {
            steps {
                echo "Run!!!"
            }
        }
    }
}

@NonCPS
def listFiles(def path) {
    def files = []
    new File(path).traverse(type: groovy.io.FileType.FILES) { file ->
        files.add(file)
    }
    return files
}
I want to add pipeline parameters listing the folders and files in a directory on an agent. E.g., when I click on "Build with Parameters", it should show checkboxes for all the folders and files under c:\project, and then I can choose which files I want for the job. I tried using the Active Choices Parameter plugin and running a Groovy script:

node (node1){
    stage('folders'){
        bat "dir /b /s c:\\project"
    }
}

I've also tried with a PowerShell script:

Get-ChildItem -path c:/project1 -Recurse -Name
Jenkins parameterized build with batch commands or PowerShell script
CI pipelines should typically be kept to building and pushing the image to a container registry; deployment should be done via principles like GitOps.

Having said that, there is no harm in adding a deploy step. A deploy step may vary depending on where you want to deploy. E.g., with the example application https://github.com/kameshsampath/drone-fruits-app-demo you can deploy:

A Docker Compose setup to run and test locally --> https://github.com/kameshsampath/drone-fruits-app-demo/blob/main/.drone.compose.yml
Deploy to Kubernetes --> https://github.com/kameshsampath/drone-fruits-app-demo/blob/main/.drone.kind.yml
Deploy to the cloud, e.g. Google Cloud and vercel.com --> https://github.com/kameshsampath/drone-fruits-app-demo/blob/main/.drone.gcloud.yml

I hope that gives you some idea of how to craft your pipelines.
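If you really do want the single pipeline to deploy onto the same locally-hosted box, here is a rough, untested sketch of a deploy step. It assumes you add a small Dockerfile (e.g. nginx serving the build output), and it talks to the host's Docker daemon through the socket, which requires the repository to be marked as trusted in Drone; the image/container name my-react-app and the port mapping are placeholders:

steps:
  # ...your existing build-static-files step...
  - name: deploy-local
    image: docker:24
    volumes:
      - name: docker-sock
        path: /var/run/docker.sock
    commands:
      - docker build -t my-react-app:latest .
      # take down the previous container if it exists, then start the new one
      - docker rm -f my-react-app || true
      - docker run -d --name my-react-app -p 8080:80 my-react-app:latest

volumes:
  - name: docker-sock
    host:
      path: /var/run/docker.sock

The trade-off is that mounting the Docker socket gives the pipeline full control of the host's Docker daemon, which is exactly why Drone only allows host volumes for trusted repositories.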
Expanding on what I'm trying to accomplish: after committing code to a locally-hosted Git server, a Drone pipeline is to test, then build, and finally deploy a React project on said server.

Limitations:
- I have minimal React and pipeline experience
- Keeping everything locally hosted (Gitea, Drone, etc.)
- KISS, preferably a single pipeline building, deploying, and taking down the previous container once new code is pushed
- I've seen similar guides recommending Docker registries to push the container to, but I'm similarly falling short on implementation/guidance

Where I'm falling short: said React project is being built, but I am unsure how to proceed with deploying it.

Current pipeline:

# .drone.yml
kind: pipeline
type: docker
name: example-build

trigger:
  branch:
    - master
  event:
    - push

steps:
  - name: build-static-files
    image: node:latest
    commands:
      - pwd
      - whoami
      - ls -al /drone
      - ls -al /drone/src
      - npm i socket.io-client @types/socket.io-client
      - chmod 777 -R ./node_modules/
      - npm run build
Dockerizing a React project via Pipeline
As PyPI support is part of the paid Artifactory subscription, you can consider using a free Artifactory cloud instance. It has full support for PyPI and other package types, but is limited in storage and data transfer. That means you'll have to adjust your automation to reuse this instance and be aware of the limits. If you have to run it locally, you will probably have to buy a license...
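Once you have a cloud instance, publishing from the pipeline is just the regular PyPI tooling pointed at the instance's PyPI endpoint. A sketch, shown as a GitLab-CI-style job; the instance URL, the repository key pypi-local and the credential variables are placeholders, and the exact endpoint for your repository is shown in Artifactory's "Set Me Up" dialog:

publish_pypi:
  image: python:3.10
  script:
    - pip install build twine
    - python -m build
    # mycompany.jfrog.io and pypi-local are placeholders for your instance and repository key
    - twine upload --repository-url "https://mycompany.jfrog.io/artifactory/api/pypi/pypi-local" -u "$ARTIFACTORY_USER" -p "$ARTIFACTORY_TOKEN" dist/*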
I would like to spin up a containerized Artifactory instance, create a PyPI repository, and publish an artifact... all of this in a CI pipeline. The artifactory-oss container image doesn't support PyPI. Any other options?
Spinning Artifactory instance (container) during CI pipeline
You can simply do the following.

steps {
    script {
        def time = new Date().getTime()
        echo "$time"
    }
}
I'm working with Jenkins to automate the test runs in my project, and I have a task/stage where I need to fetch and use an epoch timestamp. Could you please share suggestions on how I can achieve this?

Required format: 1656166890987
How to get an epoch (Unix) timestamp in Jenkins declarative pipeline code
Yes, you can. It requires you to pass the steps object of the pipeline, though.

class StepExecutor {
    def steps

    public StepExecutor(def steps) {
        this.steps = steps
    }

    // some code

    void dir(String directory, Closure statement) {
        // the passed closure has to be invoked, otherwise nothing runs inside dir
        this.steps.dir(directory) {
            statement()
        }
    }
}

Creating the object from inside of your pipeline:

pipeline {
    ....
    def stepExecutor = new StepExecutor(this);
    ...
}
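Using it from a script block would then look roughly like this; it is only a sketch and the path is an example (note the class above calls statement(), since the closure handed to dir otherwise never executes):

script {
    def executor = new StepExecutor(this)
    executor.dir('build/output') {        // example path
        sh 'ls -la'                       // any pipeline step can go here
    }
}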
Below is my pipeline code:

dir(my_directory) {
    retry(1) {
        // something
    }
}

Is there a possibility to access the dir step in Groovy through the pipeline context? I'm thinking of something like this:

class StepExecutor {
    // some code
    void dir(String directory, Closure statement) {
        this.steps.dir(directory) { statement }
    }
}
How to replicate the dir step in Groovy
I think you need to do something like this:

@Grab('org.yaml:snakeyaml:1.17')
import org.yaml.snakeyaml.DumperOptions
import org.yaml.snakeyaml.Yaml

def options = new DumperOptions()
options.setDefaultFlowStyle(DumperOptions.FlowStyle.BLOCK)
Yaml yaml = new Yaml(options)

// load existing structure
def structure = yaml.load(new File("original.yml").text)

// modify the structure
structure.tenants.tenant_3 = [
    state        : 'all',
    web_token    : true,
    cluster_pairs: ['cluster3'],
    datacenter   : 'local',
    client       : 'CLIENT_TEST'
]

// save to the new file
new File("modified.yml").write(yaml.dump(structure))

So the steps are:
1. load the data from the file into memory
2. modify the structure in a way you like
3. store the modified structure to the file

I hope it will help.
I have a YAML file (config.yaml) with tags/structure similar to what is mentioned below. I need to add a new tenant (tenant3) to the list of the existing tenants. How do I achieve it using a pipeline/Groovy script? Any help/lead would be appreciated.

consumer_services:
  - security
  - token
id: 10000
tenants:
  tenant_1:
    state: all
    web_token: true
    cluster_pairs:
      - cluster1
    datacenter: local
    client: CLIENT_TEST
  tenant_2:
    state: all
    web_token: true
    cluster_pairs:
      - cluster2
    datacenter: local
    client: CLIENT_TEST
base_network:
  subnets:
    - 10.160.10.10
    - 10.179.1.09
How to append a new tag to the list of existing tags in a YAML file using a Groovy/pipeline script
Use the imblearn pipeline:

from imblearn.pipeline import Pipeline

pipeline = Pipeline([('i', imputer), ('over', SMOTE()), ('m', model)])
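Putting it together, this is essentially your original code with only the Pipeline import swapped; model, X and y are assumed to be defined as in your snippet:

from imblearn.pipeline import Pipeline      # imblearn's Pipeline, not sklearn's
from imblearn.over_sampling import SMOTE
from sklearn.impute import SimpleImputer
from sklearn.model_selection import RepeatedStratifiedKFold, cross_validate

imputer = SimpleImputer(strategy='most_frequent')
pipeline = Pipeline(steps=[('i', imputer), ('over', SMOTE()), ('m', model)])

# imblearn's Pipeline applies SMOTE only during fit, so cross_validate works as before
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
scores = cross_validate(pipeline, X, y, scoring=['balanced_accuracy', 'f1_macro'], cv=cv, n_jobs=-1)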
I am trying to create a pipeline that first imputes missing data, then does oversampling with SMOTE, and then fits the model. My code worked perfectly before I tried SMOTE; now I can't find any solution. Here is the code without SMOTE:

scoring = ['balanced_accuracy', 'f1_macro']
imputer = SimpleImputer(strategy='most_frequent')
pipeline = Pipeline(steps=[('i', imputer), ('m', model)])
# define model evaluation
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate model
scores = cross_validate(pipeline, X, y, scoring=scoring, cv=cv, n_jobs=-1)

And here's the code after adding SMOTE (note: I also tried importing make_pipeline from imblearn):

imputer = SimpleImputer(strategy='most_frequent')
pipeline = Pipeline(steps=[('i', imputer), ('over', SMOTE()), ('m', model)])
# define model evaluation
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate model
scores = cross_validate(pipeline, X, y, scoring=scoring, cv=cv, n_jobs=-1)

When I import Pipeline from sklearn I get this error:

All intermediate steps should be transformers and implement fit and transform or be the string 'passthrough' 'SMOTE()' (type <class 'imblearn.over_sampling._smote.base.SMOTE'>) doesn't

When I try importing make_pipeline from imblearn I get this error:

Last step of Pipeline should implement fit or be the string 'passthrough'. '[('i', SimpleImputer(strategy='most_frequent')), ('over', SMOTE()), ('m', RandomForestClassifier())]' (type <class 'list'>) doesn't
Pipeline with SMOTE and Imputer errors
It would help to know exactly which terraform commands your pipeline jobs are running. Generally though, you should be able to comment out the Terraform code for your resource(s) and run the pipeline again to delete them.
My base .gitlab-ci.yml code is the following:

include:
  - template: Terraform/Base.gitlab-ci.yml  # https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Terraform/Base.latest.gitlab-ci.yml

stages:
  - init
  - validate
  - build
  - deploy
  - cleanup

init:
  extends: .terraform:fmt

validate:
  extends: .terraform:validate

build:
  extends: .terraform:build

deploy:
  extends: .terraform:deploy
  dependencies:
    - build
  allow_failure: true

cleanup:
  extends: .terraform:destroy
  dependencies:
    - build

The build stage failed but did create some resources in AWS. Then the cleanup stage succeeds with the following:

Terraform has been successfully initialized!

No changes. No objects need to be destroyed.

Either you have not created any objects yet or the existing objects were already deleted outside of Terraform.

Please keep it simple, I'm brand new to this.
How can I destroy my infra when the pipeline fails at the 'build' stage?
kill doesn't read from stdin; it only takes PIDs as arguments. So use kill $(...) instead, where the code to find the PIDs replaces the dots in the $(...) part.

To find the PIDs:

ps -A | grep smthn | grep -v grep | cut -d " " -f 1

Here the first grep looks for smthn, and the second grep filters out the grep command that is looking for smthn and is still running at this time. Then cut picks just the first field of the output, which is the PID.

Put together:

kill -4 $(ps -A | grep smthn | grep -v grep | cut -d " " -f 1)

If you have pgrep and pkill and the process is always named the same (always smthn, not smthn1, smthn2, and so on), you can simplify this a lot. pgrep just returns the PIDs, so you don't need grep -v and cut:

kill -4 $(pgrep smthn)

pkill just sends the signal to whatever process name you specify:

pkill -4 smthn
I tried pipelines like:

ps -A | grep "smthn" | kill -4 (smthn's PID)

So how can I grab multiple processes' PIDs from the grep output? Something like:

ps -A | grep "smthn", "smthn1", "smthn2" | kill -4 (smthn's PID)
How to do ps pipelines on Linux?
In your catalog you can define the filepath like:

hdfs://user@server:port/path/to/data

https://kedro.readthedocs.io/en/stable/data/data_catalog.html#specifying-the-location-of-the-dataset
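For example, a sketch of the two config entries might look like the following; the dataset type, host, port, path and credential fields are placeholders, and the exact credential keys depend on the HDFS driver fsspec uses in your environment:

# conf/base/catalog.yml
my_output_data:
  type: pandas.ParquetDataSet
  filepath: hdfs://user@server:port/path/to/data/output.parquet
  credentials: my_hdfs_creds

# conf/local/credentials.yml
my_hdfs_creds:
  # passed through to the underlying fsspec HDFS filesystem
  user: my_hdfs_user

With that entry, returning the pandas DataFrame named my_output_data from a node is enough for Kedro to write it to HDFS.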
I'm trying to write the output of my Kedro pipeline to the HDFS file system, but I couldn't find out how to do that on the internet or in the Kedro documentation. If anybody has configured this in the Kedro catalog, please share a sample of how to do it.

Also, how do I connect to HDFS securely using credentials? I have the data in a pandas DataFrame.

What would the entry in catalog.yml look like, and where do I specify the credentials?
How to write to HDFS using Kedro