Response | Instruction | Prompt
---|---|---|
The delivery pipeline is capable of doing zero-downtime deployments. There is an article here that describes how to do it. It is a little outdated, but it mostly holds. As for rollbacks, the pipeline probably does not handle them as well as Active Deploy would. You can redeploy an older build over a newer one to perform a rollback, but that is about it. There is currently work being done to add some of the Active Deploy features to the delivery pipeline.
|
I am trying to understand the key differences between the deployment capabilities of Delivery Pipeline and Active Deploy. From the various documentation, I understand that Active Deploy can deploy with zero downtime and supports rollback. I am curious to know the deployment capabilities of Delivery Pipeline.
|
Bluemix: Does Delivery Pipeline support rollback of deployments?
|
Setting Content-Type to application/octet-stream instead of text/plain solved my problem.
|
I've developed a proxy service in OSB 12c with MTOM enabled. I can send binary files such as PDF, DOC, PPT or XLS successfully, but as soon as I send a text file the following error arises.

REQUEST:

<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope" xmlns:upl="http://UploadFileService.acme/">
<soap:Header/>
<soap:Body>
<upl:scanDocRequest>
<requestMsg>
<doc>cid:719461305114</doc>
</requestMsg>
</upl:scanDocRequest>
</soap:Body>
</soap:Envelope>RESPONSE:<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
<soap:Body>
<con:fault xmlns:con="http://www.bea.com/wli/sb/context">
<con:errorCode>OSB-382118</con:errorCode>
<con:reason>Decoding of MIME attachments from MIME Content-Transfer-Encoding='7bit' not supported</con:reason>
<con:location>
<con:node>AntivirusPipelinePairNode</con:node>
<con:pipeline>request-a00020f.N696e5fe3.0.15143454755.N8000</con:pipeline>
<con:stage>ReportingIn</con:stage>
<con:path>request-pipeline</con:path>
</con:location>
</con:fault>
</soap:Body>
</soap:Envelope>

The response fails with: Decoding of MIME attachments from MIME Content-Transfer-Encoding='7bit' not supported
|
Oracle Service Bus - Decoding of MIME attachments from MIME Content-Transfer-Encoding='7bit' not supported
|
You are looking for the async.waterfall function. Alternatively, you can apply async.seq or async.compose with multiple arguments if you need a function that you can pass an initial input to.
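For illustration, here is a minimal sketch of the questioner's pipeline rewritten with async.waterfall (assuming the async npm package; the step bodies here are made up):

const async = require('async');

async.waterfall([
  (next) => next(null, 'sample input'),           // seed the chain with the initial input
  (input, next) => next(null, input + ' step1'),  // each task processes and passes the result on
  (input, next) => next(null, input + ' step2')
], (error, result) => {
  if (error) return console.error(error);         // any task calling next(err) stops the chain
  console.log(result);                            // 'sample input step1 step2'
});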
|
I'm taking a look at the async library but I can't seem to find a control flow for handling pipelines. I'm just wondering if I'm missing something here. I want to implement a pipeline. Example:

let pipeline = [];
pipeline.push((input, next) => { next(null, input); });
pipeline.push((input, next) => { next(null, input); });
var pipelineResult = pipelineRunner.run(pipeline, 'sample input', (error, result) => {});

Explanation: a series of functions is called. Each function receives an input and a next function. Each function processes the input and passes it as a parameter to the next function. As a result of the pipeline execution, I get the processed input, or, if any function calls next with an error, the pipeline stops and the callback is called. I guess this is a pretty common use case, so I think async can do it, but I'm not able to find it. If you know of any other library that can achieve such a result, that would be acceptable too.
|
Does the async library have any control flow for handling pipelines?
|
You can create an array ($Servers, for example) and then add each result to it:

$Servers = @()
"OU=Domain Controllers,$mydomain", "OU=Server,OU=Berlin,$mydomain" |
# For each OU get Windows Server
ForEach { $Servers += Get-ADComputer -Filter { OperatingSystem -Like '*Windows Server*' } -Properties OperatingSystem -SearchBase $_ | Select -ExpandProperty Name}
|
I'd like to store the output of the Select command in a variable. Here's the original code:

# OUs to search for servers
"OU=Domain Controllers,$mydomain", "OU=Server,OU=Berlin,$mydomain" |
# For each OU get Windows Server
ForEach { Get-ADComputer -Filter { OperatingSystem -Like '*Windows Server*' } -Properties OperatingSystem -SearchBase $_ } |
Select -Exp Name | Add-Content C:\serverfile.txt

In the last line I'd like to change Add-Content to a command that adds the output to a variable $Servers. However, I can't get the syntax right. I tried:

| Add-Content $Servers
| $Servers
"> $Servers"
$Servers += Select -Exp Name
|
Powershell Add Output of Select to Variable
|
This is easily done with recursion:

repeat() {
count="$1"
shift
if [ "$count" -ge 1 ]
then
"$@" | repeat "$((count-1))" "$@"
else
cat
fi
}

Examples:

$ echo foo | repeat 0 sed 's/$/ 0/'
foo
$ echo foo | repeat 1 sed 's/$/ 0/'
foo 0
$ echo foo | repeat 3 sed 's/$/ 0/'
foo 0 0 0
|
I have a command that appends a computed column to stdout, and I would like to apply it a variable number N of times. For example, if my input was 'hello\nworld\n' and I wanted to append a column of 0s N=3 times, I could type the following:

echo -e 'hello\nworld' | sed 's/$/ 0/' | sed 's/$/ 0/' | sed 's/$/ 0/'

I've been trying stupid ideas like:

echo -e 'hello\nworld' | (for i in $(seq 1 $N); do echo $(cat) 0; done)

and

echo -e 'hello\nworld' | (for i in $(seq 1 $N); do sed 's/$/ 0/'; done)

but clearly these are not chaining the pipeline. Any ideas?
|
Run script on pipeline output a variable number of times
|
You can use sort with the 7th column:

$ sort -k7 -n file
-rw-r----- 1 matias matias 892743 sep 3 08:36 aaa.txt
-rw-r----- 1 matias matias 67843408 sep 11 08:55 file1
-rw-r----- 1 matias matias 892743 ago 18 08:09 qwe
-rw-r----- 1 matias matias 1952 oct 23 12:05 file2
-rw-r----- 1 matias matias 965 oct 23 10:14 asd.txt

From man sort:

-n, --numeric-sort
compare according to string numerical value
-k, --key=KEYDEF
sort via a key; KEYDEF gives location and type

However, this is quite fragile and, in general, you should not parse the output of ls.
|
I want to order some command output (using a pipeline), taking into account some field from the output. For example, if I run the l command, I have:

-rw-r----- 1 matias matias 67843408 sep 11 08:55 file1
-rw-r----- 1 matias matias 1952 oct 23 12:05 file2
-rw-r----- 1 matias matias 965 oct 23 10:14 asd.txt
-rw-r----- 1 matias matias 892743 sep 3 08:36 aaa.txt
-rw-r----- 1 matias matias 892743 ago 18 08:09 qwe

I want to order this output by, for example, the day-of-the-month field, so the output should be:

-rw-r----- 1 matias matias 892743 sep 3 08:36 aaa.txt
-rw-r----- 1 matias matias 67843408 sep 11 08:55 file1
-rw-r----- 1 matias matias 892743 ago 18 08:09 qwe
-rw-r----- 1 matias matias 1952 oct 23 12:05 file2
-rw-r----- 1 matias matias 965 oct 23 10:14 asd.txt

How can I do this? I usually use grep, cat, l, ls and ll, but I can't figure out how to achieve this.
|
Order command output by a line field
|
If you replace the : in the output with =, you can pipe it to ConvertFrom-StringData and get a nice hashtable instead:

$values = testCommand
$ValueTable = $values -replace ": ","=" |ConvertFrom-StringData
$ValueTable["a"] # this will return the value "1"
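If you also want the default-value behaviour asked about in the question, a small sketch (assuming the key may be missing from the table):

# Return the value for 'a' if present, otherwise the string 'default'
if ($ValueTable.ContainsKey("a")) { $ValueTable["a"] } else { "default" }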
|
I have data coming from another command's output. testCommand will print something like this:

a: 1
b: test
c: an3

I want to grep the value of a specific property. testCommand | findstr 'a' prints a: 1, but I want to extract the value 1. Couldn't figure out the way! If it doesn't exist, print a default value of default.
|
Get Value from PowerShell Pipeline
|
The classic example of a pipeline that I have encountered is an assembly line!
Say you had A-B-C-D-E as the five stages to completion, each of them involving different resources.
When E is being worked upon, the resources that can do A-D are idle, and that is what pipelining exploits in order to get some form of concurrency. So your timeline would look something like this - say you had 1000 work items, each of which went through the stages above:
1. Without pipelining you would basically repeat A-B-C-D-E 1000 times, leading to a total time of 5000 slots.
2. With pipelining you pay for A-B-C-D-E in full only for the first item, and then every subsequent time tick you get an output, i.e.

A-B-C-D-E // ends at t=5
__A-B-C-D-E // ends at t=6
____A-B-C-D-E // ends at t=7
...

A-B-C-D-E // 1000th item starts at t=1000 and ends at t=1004

I guess you could extend that to the 10-stage pipelining case as well.
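To make that concrete for the 10-stage, 1000-instruction problem in the question (a worked calculation added here, under the usual assumption of 1 cycle per stage and no stalls): without pipelining the program takes 1000 x 10 = 10000 cycles; with pipelining it takes 10 cycles for the first instruction plus 1 cycle for each of the remaining 999, i.e. 1009 cycles. The speedup is therefore 10000 / 1009, roughly 9.9, which approaches the number of stages (10) as the instruction count grows.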
|
I'm having a lot of trouble understanding how to calculate the speedup of pipelining. This is the slide that was provided for this in my Computer Organization class. I don't really understand the formula. Why is it 2n?
Why is it 0.5n + 1.5? My professor started to explain that the 1.5 comes from the 1.5 hours between when A starts and B starts... and that's easy to see with a picture like that, but for a problem like:

Suppose you have a pipelined machine with a 10 stage pipeline and a program with 1000 instructions whose dependencies are such that the pipeline does not stall. If each stage of the pipe takes 1 cycle, what is the speedup gained by pipelining compared to execution of the program on the same machine without exploiting the pipeline?

I don't think I could just draw a picture... plus I know there has to be some better way than drawing a picture. Can anyone explain speedup with pipelining and/or provide some good material online to understand this?
|
Understanding the speedup equation with Pipelining
|
You version custom pipeline components by setting the Assembly Version Number in the project the component is in. When you GAC it and add it to your toolbox in Visual Studio, it will read that Assembly Version Number, so any pipeline you add it to will reference that version. There is no need to set it manually.
|
!! This is not about custom pipeline components !! I want to know how I can manually specify a version for the pipelines that we define using the already provided pipeline components in Visual Studio. I am new to BizTalk and don't even know whether we can manually specify a version for pipelines or not. We can specify a version for a schema via the version property in the schema's properties, but there is no version property in a pipeline's properties. Also, I want to know whether we can specify a version number for a pipeline component, and if yes, how? Screenshots attached for better understanding.
|
How to version Pipelines in BizTalk Application?
|
Let us say that after an instruction enters the pipeline, it takes x stages before any register write by that instruction is visible to a following instruction. Then you have to take care of the RAW dependencies among every set of x consecutive instructions. In the worst case you can take x to be the maximum number of stages in the pipeline. Now, the case in the question looks like a homework problem, and since the pipeline structure is not defined you will have to look at the RAW dependencies over all the instructions, which in this case are:

I2 on I1 over R1
I3 on I1 over R1
I4 on I2 over R3
I4 on I3 over R4
|
I am confused about finding RAW dependencies: do we only look at adjacent instructions, or at non-adjacent ones as well? Consider the following assembly code:

I1: ADD R1 , R2, R2;
I2: ADD R3, R2, R1;
I3: SUB R4, R1 , R5;
I4: ADD R3, R3, R4;

Find the number of read-after-write (RAW) dependencies in the above code, assuming ADD x,y,z means x <- y + z. I am getting 2 dependencies: I2-I1 and I4-I3.
|
Read After Write(RAW) HAZARD
|
Usually you either forward or place a bubble, unless you do not have the datum required to forward, in which case you may need to stall anyway.
In your example, suppose your pipeline allows forwarding from the MEM stage to the EX stage.
You can draw a time diagram showing which stage each instruction is in.

Without forwarding:

time 1 2 3 4 5 6 7 8
lw $5, 8($5) IF ID EX MEM WB
sw $5, 12($6) IF ID-------> EX MEM WB

you have to stall two cycles for $5 to become available to the EX stage of the second instruction.

With forwarding:

time 1 2 3 4 5 6 7
lw $5, 8($5) IF ID EX MEM WB
sw $5, 12($6) IF ID---> EX MEM WB

In this case the data read from memory is available to be forwarded to the EX stage of the sw instruction.
|
Consider the following MIPS instructions:

lw $5, 8($5)
sw $5, 12($6)

Now, as far as I understand, Memory[8 + $5] is available to us at the beginning of stage #5, whereas sw needs the $5 value at stage #4, so we can just forward. Why do we also need a bubble/stall?
|
Why is a bubble needed?
|
The pipeline calls transform on the preprocessing and feature selection steps if you call pl.predict.
That means that the features selected in training will be selected from the test data (the only thing that makes sense here). It is unclear what you mean by "apply" here. Nothing new will be learned when calling "predict", but all steps will be used with "transform".
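A minimal sketch of that behaviour (the random data is made up, and SelectKBest stands in for the question's RandomizedLogisticRegression purely as an example selector):

import numpy as np
from sklearn import preprocessing
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest
from sklearn.ensemble import RandomForestClassifier

X, y = np.random.rand(100, 20), np.random.randint(0, 2, 100)

pl = Pipeline([('preprocessing', preprocessing.StandardScaler()),
               ('feature_selection', SelectKBest(k=5)),
               ('classification', RandomForestClassifier())])

pl.fit(X, y)                       # fit() runs on every step; intermediate steps are fit_transform'ed
pl.predict(np.random.rand(5, 20))  # only the already-fitted transform() of each step is reused here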
|
Apologies if this is obvious, but I couldn't find a clear answer to this. Say I've used a pretty typical pipeline:

feat_sel = RandomizedLogisticRegression()
clf = RandomForestClassifier()
pl = Pipeline([ ('preprocessing', preprocessing.StandardScaler()),
('feature_selection', feat_sel),
('classification', clf)])
pl.fit(X,y)

Now when I apply pl to a new set with pl.predict(X_classify), is RandomizedLogisticRegression going to be reapplied, or are the columns that were selected in training going to be used on the new data? If not, is there a way for the pipeline to differentiate between feature selectors and feature extractors/scalers/other transforms that should be applied to the new input? Until I'm sure, I'm skipping the pipeline feature and just doing each step manually and maintaining state. Thanks!
|
In sklearn, does a fitted pipeline reapply every transform?
|
You can open a new view in the same layout (http://www.paraview.org/Wiki/Beginning_GUI#Split_windows), and when you make a video you will record both because they are in the same layout.
Still, they will not be synchronized (to change the visibility of an object, you will have to do it in each view); for automating this, the Python tracing can help.
|
I am visualizing multiple steps of a simulation and exporting them as a movie. Since the geometry is complicated, I would like to show it from several perspectives side-by-side, with the same pipeline. ParaView has "Render View (Comparative)" which lets me do just that, but it resets the pipeline for each new comparative view. Since the pipeline is complex, I have to set it up manually for each new view, which is tedious, and changes to one don't change the other. Am I overlooking a simple way to show exactly the same thing, just from different points of view, in the comparative render view?
|
Comparative view in Paraview with identical Pipeline
|
At first, I attempted thinkjson's CloudStorageLineInputReader but had no success. Then I found this pull request, which led me to rbruyere's fork. It has some linting issues (like the spelling of GoolgeCloudStorageLineInputReader), but at the bottom of the pull request it is mentioned that it works fine, and it asks whether the project needs to be taken over. Hope that helps!
|
This is a Python App Engine question, mapreduce library 1.9.21. I have code writing lines to a blob in the local blobstore, then processing that using the mapreduce BlobstoreLineInputReader. Given that the Files API is going away, I thought I'd retarget all my processing to Cloud Storage. I would expect to find a class called GoogleCloudStorageLineInputReader, but there isn't anything like that. Is it hiding somewhere? Is there some way I can use GoogleCloudStorageInputReader to read lines? Another possibility is using GoogleCloudStorageRecordInputReader, but for that my input file needs to be in LevelDB format and I don't know how to create that except with a GoogleCloudStorageConsistentRecordOutputWriter, which I don't know how to use outside a mapreduce context. How might I do that? Or am I doing this all wrong, and is there some other possibility I've missed?
|
What is the equivalent of BlobstoreLineInputReader for targeting Google Cloud Storage?
|
You have two major options:
- use the Execute shell or Execute Windows batch command build steps
- use a Java-based tool like Liquibase, Ant tasks, the Maven plugin, or many more.
|
I'm doing an automatic deployment to move binaries, SQL scripts and properties files from the development server to the staging server. Note that in my case the property and XSD files were present on the computer's hard drive instead of on the Tomcat web server. Jenkins has the ability to deploy applications on Tomcat with the help of SVN. How will Jenkins execute SQL scripts and apply property file changes on the remote server?
|
Automatic deployment from development server to Remote server
|
The problem is fixed. When I used pip install mysql-python, it installed a 32-bit version of mysql-python; I uninstalled it and downloaded a 64-bit version.
|
When I wrote the Scrapy pipeline and then tried to run scrapy crawl dmoz, an error occurred:

File "F:\Python\lib\site-packages\scrapy\utils\misc.py", line 42, in load_object
raise ImportError("Error loading object '%s': %s" % (path, e))
ImportError: Error loading object 'tutorial.pipelines.Tutorialpipeline': DLL load failed: %1 is not a valid Win32 application.

UPDATE: the problem is fixed. When I used pip install mysql-python, it installed a 32-bit version of mysql-python; I uninstalled it and downloaded a 64-bit version. However, I met another problem: when I run the spider, it shows:

_mysql_exceptions.ProgrammingError: (1064,"You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'desc,pic)values ('xe4\xbe\x9b\xe5\xba.......' at line 1")

I am not sure what happened; could anybody help me figure it out? Here is how I write the MySQL insert function in the pipeline:

def _conditional_insert(self,tx,item):
tx.execute('insert into raw(title,area,date,sclass,link,desc,pic) values (%s,%s,%s,%s,%s,%s,%s) '
(item['title'],item['area'],item['date'],item['sclass'],item['link'],item['desc'],item['pic']))
|
Scrapy pipeline mysql error
|
Assuming:
- you have multiple interleaved sessions
- you have some kind of session id to identify and correlate separate events
- you're free to implement the consumer logic
- absolute ordering of the merged events is not important

wouldn't it then be possible to use separate topics with the same number of partitions for the three kinds of events and have the consumer merge those into a single event during the flush to S3? As long as you have more than one partition in total, you would then have to make sure to use the same partition key for the different event types (e.g. a mod-hash of the session id) so they end up in the same (per-topic corresponding) partitions. They could then be merged using a simple consumer which reads the three topics from one partition at a time. Kafka guarantees ordering within partitions but not between partitions. Big warning for the edge case where a broker goes down between the page request and the page reload, though.
|
I am using Kafka as a pipeline to store analytics data before it gets flushed to S3 and ultimately to Redshift. I am thinking about the best architecture to store data in Kafka, so that it can easily be flushed to a data warehouse. The issue is that I get data from three separate page events:
- when the page is requested
- when the page is loaded
- when the page is unloaded

These events fire at different times (all usually within a few seconds of each other, but up to minutes/hours away from each other). I want to eventually store a single event about a web page view in my data warehouse. For example, a single log entry as follows:

pageid=abcd-123456-abcde, site='yahoo.com' created='2015-03-09 15:15:15' loaded='2015-03-09 15:15:17' unloaded='2015-03-09 15:23:09'

How should I partition Kafka so that this can happen? I am struggling to find a partition scheme in Kafka that does not need a process using a data store like Redis to temporarily hold data while merging the CREATE (initial page view) and UPDATE (subsequent load/unload) events.
|
Updating Kafka Event Log
|
I'm not sure this is the best method, but at least it resembles PowerShell idioms throughout:

Function Get-Amount{
[CmdletBinding()]
Param(
[Parameter(ValueFromPipeline=$true)]$t,
[Parameter(position=1)]$r,
[Parameter(position=2)]$P)
PROCESS{$P*[math]::Pow(1+$r,$t)}
}
Function Get-Result{
[CmdletBinding()]
Param(
[Parameter(ValueFromPipeline=$true)]$x,
[Parameter(Position=1)]$Cmdlet, #positional arguments here makes calling more readable
[Alias('args')] #careful, $args is a special variable
[Parameter(Position=2)]$Arguments=@()) #the default empty array is required for delegate style 1
PROCESS{
#invoke style 1 #works with delegate styles 1,2,3
iex "$x | $Cmdlet @Arguments"
#invoke style 2 #works with delegate styles 2,3
$x | . $Cmdlet @Arguments
}}
5,20 | Get-Result 'Get-Amount -r 0.05 -P 100' #delegate style 1
5,20 | Get-Result Get-Amount 0.05,100 #delegate style 2
5,20 | Get-Result Get-Amount @{r=0.05;P=100} #delegate style 3

Which results in:

127.62815625
CommandNotFoundException
265.329770514442
CommandNotFoundException
127.62815625
127.62815625
265.329770514442
265.329770514442
127.62815625
127.62815625
265.329770514442
265.329770514442
|
Building on this technique to use cmdlets as "delegates", I am left with this question: is there a way to pass a cmdlet with prescribed named or positional parameters to another cmdlet that uses the PowerShell pipeline to bind the remaining parameters to the passed cmdlet? Here is the code snippet I'd like to be able to run:

Function Get-Pow{
[CmdletBinding()]
Param([Parameter(ValueFromPipeline=$true)]$base,$exp)
PROCESS{[math]::Pow($base,$exp)}
}
Function Get-Result{
[CmdletBinding()]
Param([Parameter(ValueFromPipeline=$true)]$x,$Cmdlet)
$x | . $Cmdlet
}
10 | Get-Result -Cmdlet 'Get-Pow -exp 2'
10 | Get-Result -Cmdlet 'Get-Pow -exp 3'
10 | Get-Result -Cmdlet Get-Pow -exp 2
10 | Get-Result -Cmdlet Get-Pow -exp 3

The first two calls to Get-Result result in CommandNotFoundException because Get-Pow -exp 2 "is not the name of a cmdlet." The last two calls to Get-Result result in NamedParameterNotFound,Get-Result since that syntax is actually attempting to pass the parameter -exp to Get-Result, which it does not have. Is there some other way to set this up so it works?
|
Is there a way pass a Cmdlet with some parameters to another Cmdlet that pipes the remaining parameters to it?
|
STDIN_FILENO is a constant defined in unistd.h. The following symbolic constants are defined for file streams:

STDIN_FILENO - file number of stdin. It is 0.
STDOUT_FILENO - file number of stdout. It is 1.
STDERR_FILENO - file number of stderr. It is 2.

As these are constants, you can't reassign them.
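If you really want to avoid dup2, below is a minimal self-contained sketch of the same redirection using plain dup. It relies on dup returning the lowest free descriptor, and the ls/tail commands are only placeholders to make the effect visible:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void){
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); exit(1); }
    if (fork() == 0) {               /* child: make the read end its stdin */
        close(fd[1]);
        close(STDIN_FILENO);         /* free descriptor 0 */
        dup(fd[0]);                  /* dup returns the lowest unused fd, i.e. 0 */
        close(fd[0]);
        execlp("tail", "tail", "-n", "2", (char *)NULL);
        perror("execlp"); exit(1);
    }
    /* parent: make the write end its stdout */
    close(fd[0]);
    close(STDOUT_FILENO);            /* free descriptor 1 */
    dup(fd[1]);
    close(fd[1]);
    execlp("ls", "ls", "-l", (char *)NULL);
    perror("execlp");
    return 1;
}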
|
The documentation of dup says that the return value is the new file descriptor, or -1 on error. I'm getting this error, and I really don't know why:

mav@mav-MS-7592:~/FRI/OSIZPIZ$ gcc pipe.c -o pipe
pipe.c: In function ‘main’:
pipe.c:26:16: error: lvalue required as left operand of assignment
STDIN_FILENO = dup(fd[0]);

Here is my code:

int main(int argc, char* argv[]){
//fd[0] - reading
//fd[1] - writing
int fd[2];
pid_t childpid;
if(pipe(fd) == -1) errexit("pipe");
//child 0
//parent PID
if((childpid = fork()) == -1) errexit("fork");
if(childpid == 0){
close(fd[1]);
STDIN_FILENO = dup(fd[0]);
}else{
close(fd[0]);
STDOUT_FILENO = dup(fd[1]);
}
return 0;
}

I know I could avoid this with dup2(fd[0], STDIN_FILENO); but I want to use just dup... Thanks in advance!
|
dup error: lvalue required as left operand of assignment
|
After talking to some people, I realized I was overthinking this. The solution is simpler than I was imagining. To pipe through commands, just do:

public function whateverMethod(Dispatcher $dispatcher) {
    $dispatcher->pipeThrough([]); // array of commands
}

The Dispatcher comes in through beautiful Laravel 5 method injection!
|
Is it possible to create a sort of command pipeline when dispatching a command? For instance, $this->dispatch(new UploadFileCommand) would call ValidateFileCommand, WhateverCommand. It's like pipeThrough, but it triggers specific commands.
|
Laravel 5 Commands - dispatch command pipeline
|
The error is happening while you are making a SELECT query. There is a single placeholder in the query, but item['title'] is a list of strings - it has multiple values:

self.cursor.execute("SELECT title, url FROM items WHERE title= %s", item['title'])

The root problem is actually coming from the spider. Instead of having a single item returned with multiple links and titles, you need to return a separate item for every link and title. Here is the code of the spider that should work for you:

import scrapy
from scrapycrawler.items import DmozItem
class DmozSpider(scrapy.Spider):
name = "dmoz"
allowed_domains = ["snipplr.com"]
def start_requests(self):
for i in range(1, 146):
yield self.make_requests_from_url("https://snipt.net/public/?page=%d" % i)
def parse(self, response):
for sel in response.xpath('//article/div[2]/div/header/h1/a'):
item = DmozItem()
item['title'] = sel.xpath('text()').extract()
item['link'] = sel.xpath('@href').extract()
yield item
|
The problem I am facing is that my Scrapy code, specifically the pipeline, presents me with a programming error:

mysql.connector.errors.ProgrammingError: Not all parameters were used in the SQL statement

This is my code for the pipeline:

import csv
from scrapy.exceptions import DropItem
from scrapy import log
import sys
import mysql.connector
class CsvWriterPipeline(object):
def __init__(self):
self.connection = mysql.connector.connect(host='localhost', user='test', password='test', db='test')
self.cursor = self.connection.cursor()
def process_item(self, item, spider):
self.cursor.execute("SELECT title, url FROM items WHERE title= %s", item['title'])
result = self.cursor.fetchone()
if result:
log.msg("Item already in database: %s" % item, level=log.DEBUG)
else:
self.cursor.execute(
"INSERT INTO items (title, url) VALUES (%s, %s)",
(item['title'][0], item['link'][0]))
self.connection.commit()
log.msg("Item stored : " % item, level=log.DEBUG)
return item
def handle_error(self, e):
log.err(e)

It gives me this exact error when I run the spider: http://hastebin.com/xakotugaha.py. As you can see, it clearly crawls, so I doubt anything is wrong with the spider. I am currently using the Scrapy web crawler with a MySQL database. Thanks for your help.
|
Scrapy ProgrammingError: Not all parameters were used in the SQL statement
|
You're not quoting the | in the second line. Also, try to avoid backticks to escape newlines. I've rewritten it to use subexpressions $().

"$($user.name)|$($user.DisplayName)" | Out-File $output -Append
"Managing Group: | $($_.name)| Group Description: $($_.description -replace "`r`n", " ")" | Out-File $output -Append
|
The following code line is giving me a pipeline error:

$user.name + "|" + $user.DisplayName | Out-File $output -Append
"Managing Group: " | + $_.name + "|" +`
" Group Description: " + ($_.description -replace "`r`n", " ") | Out-File $output -AppendI have tried replacing the part of code with:(& { process{$_.description -replace "`r`n", " "}} )
(foeach-object{$_.description -replace "`r`n", " "} )
%{$_.description -replace "`r`n", " "}But nothing seems to fix the error.Error:Expressions are only allowed as the first element of a pipeline. At
C:\AD-User.MonitorAll.ps1:92 char:82
+ " Group Description: " + ($_.description -replace "rn", " ") <<<< | Out-File $output -Append
+ CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : ExpressionsMustBeFirstInPipeline

UPDATE: my issue was that I was missing the " " around the | after "Managing Group: ":

$user.name + "|" + $user.DisplayName | Out-File $output -Append
"Managing Group: " + "|" + $_.name + "|" +`
" Group Description: " + ($_.description -replace "`r`n", " ") | Out-File $output -Append
|
Error with Out-File: Expressions are only allowed as the first element of a pipeline [closed]
|
In the child process, after dup2(pipefd[1], 1) you should also close the read end pipefd[0], which you are not using. The same applies to the parent process: after dup2(pipefd[0], 0) it should close the write end pipefd[1], otherwise tail never sees end-of-file on its stdin. So your code should be edited as below:

int pipefd[2];
pipe(pipefd);
int id = fork();
if(id == 0)
{
dup2(pipefd[1], 1);
close(pipefd[0]);
execvp("ls", (char*[]){"ls", "-l", NULL});
}
else
{
dup2(pipefd[0], 0);
close(pipefd[1]);
execvp("tail", (char*[]){"tail", "-n", "2", NULL});
waitpid(id, NULL, 0);
}
|
For some reason I can't get it right: I want to call "ls -l" and "tail -n 2" through a pipeline so that the last two files in the list of files are shown.
Here is the code:

int pipefd[2];
pipe(pipefd);
int id = fork();
if(id == 0)
{
dup2(pipefd[1], 1);
close(pipefd[1]);
execvp("ls", (char*[]){"ls", "-l", NULL});
}
else
{
dup2(pipefd[0], 0);
execvp("tail", (char*[]){"tail", "-n", "2", NULL});
waitpid(id, NULL, 0);
close(pipefd[0]);
}
return 0;

What is the problem in this code? I feel like I have a misunderstanding here; I also searched a lot and no answer was found on the internet...
|
Using a simple pipeline unix
|
Just define a bash function for your purpose (perhaps in your ~/.bashrc if you want to make it permanent), e.g.

editcat() {
cat $* > output
gedit output
You might want to make that function fancier by generating the output file as a temporary file using mktemp(1):

editcat() {
local editemp=$(mktemp)
cat $* > $editemp
gedit $editemp
rm $editemp
But you might consider just using less(1).
|
I want to achieve something quite simple with a pipeline:

step 1 : cat input1 input2 > output
step 2 : gedit output

I can do cat input1 input2 > output | gedit output, but I wonder how I can omit typing the name of the output file in this case, so that the file created by the cat redirect is the file gedit opens. Thanks!
|
Use pipe line to open a file after it's just created
|
I'm pretty sure you just need to use AddParameter or AddArgument after adding the Get-ClusterResource command to the pipeline (see AddParameter on MSDN). Once you have the first pipeline added (only a single command in this case), use var result = ps.Invoke();, yank the required info from the result.Members collection, and use it with AddParameter or AddArgument after adding Get-ClusterGroup. Then continue to use AddCommand to fill in the rest of the pipeline. The PowerShell Invoke method has an example on MSDN (copied and pasted for posterity):

// Using the PowerShell object, call the Create() method
// to create an empty pipeline, and then call the methods
// needed to add the commands to the pipeline. Commands
// parameters, and arguments are added in the order that the
// methods are called.
PowerShell ps = PowerShell.Create();
ps.AddCommand("Get-Process");
ps.AddArgument("wmi*");
ps.AddCommand("Sort-Object");
ps.AddParameter("descending");
ps.AddArgument("id");
Console.WriteLine("Process Id");
Console.WriteLine("------------------------");
// Call the Invoke() method to run the commands of
// the pipeline synchronously.
foreach (PSObject result in ps.Invoke())
{
Console.WriteLine("{0,-20}{1}",
result.Members["ProcessName"].Value,
result.Members["Id"].Value);
} // End foreach.
|
I want to examine a MS service (with display name 'MyService', say) on a failover cluster and to this end I want to evaluate powershell commands in C#.
The commands I have in mind are:

$a = Get-ClusterResource "MyService"
$b = Get-ClusterGroup $a.OwnerGroup.Name | Get-ClusterResource | Where-Object {$_.ResourceType -eq "Network Name"}

I already figured out how to load the FailoverClusters module into the PowerShell instance. I'm creating the shell using
the following code:InitialSessionState state = InitialSessionState.CreateDefault();
state.ImportPSModule(new[] { "FailoverClusters" });
PowerShell ps = PowerShell.Create(state);

With this ps instance I can now successfully execute single cluster evaluation commands. Now my understanding is that if I use ps.AddCommand twice, first with Get-ClusterResource and then with the commands from the next line, I will pipe the result of Get-ClusterResource into the next command, which I don't want to do since the -Name parameter of Get-ClusterResource does not accept results from a pipe. (Rather, the second line would be built using AddCommand.) My question is: how do I pass the variable $a to the second line in a C# PowerShell invoke? Do I have to create two PowerShell instances and evaluate the first line first, passing its result somehow to a second call, or is it possible to define a variable in a programmatic PowerShell instance?
|
Using variable in c# invocation of powershell
|
If you want to change Bootstrap variables and use variables/mixins across the files, you should create a file called bootstrap_and_overrides.less; there you import Bootstrap and all your other files in the correct order. This single file should be required in application.css (plus any other files, like jquery-ui.css, etc.). You should not include the Bootstrap files in application.css, though. You could also just call it application.css.less, but I had some problems with that approach.
|
I know that this issue has been talked about a few times, but I still can't find one proper way. I use Less and I have bootstrap.less (no gem), global.less, variables.less and login.less, all in the app/assets/stylesheets directory. My application.less looks like this:

*= require 'bootstrap'
*= require 'global'
*= require 'login'

and I include it with:

<%= stylesheet_link_tag "application", media: "all", "data-turbolinks-track" => true %>

I tried rake assets:clean assets:precompile, and I tried @import in many ways. The issues I have:
login.less and global.less don't see variables until I @import them.
I can't override Bootstrap directives without !important, and it confuses me so much. Even if I import them in the correct order in my application.less, it still generates one big application.css where Bootstrap is at the end. I don't get it. I want to have my Less files in the correct order so I can override things as I want. It would be good if it compiles to one file to make the browser load the website faster (I thought require does that, but it doesn't look like it). Please advise what I should do; I lost my day on this issue.
|
Rails assets pipeline with bootstrap
|
Parse.Launch does not use the command line indirectly; it just behaves like gst-launch. You can also create an element using the ElementFactory and pass it the parameters like this:

var playbin = ElementFactory.Make("playbin", "my-playbin");
playbin["uri"] = "file:///a:/test.avi";
|
I use gstreamer-sharp as follows:

var pipeDescription = "playbin uri=file:///a:/test.avi ";
var pipeline = Gst.Parse.Launch(pipeDescription) as Gst.Bin;

As far as I understand, it starts gstreamer's launcher and gives parameters to gstreamer, the same as if I launched gstreamer from the command line. Is this the only way to work with gstreamer? Can I use its functions, as in other libraries (function();), without indirectly using the command line? Is this possible for cross-platform use if I use gst-launch.exe?
|
Direct use of gstreamer-sharp (without command line)
|
If you're running GATE through Gate.init(), then you can easily load two Controller objects:

CorpusController pipeline1 = (CorpusController) PersistenceManager.loadObjectFromFile(new File("savedState.xgapp"));
CorpusController pipeline2 = (CorpusController) PersistenceManager.loadObjectFromFile(new File("another.xgapp"));
Corpus corpus = Factory.newCorpus("web corpus");
pipeline1.setCorpus(corpus);
pipeline2.setCorpus(corpus); // I don't see why, but you may need two different corpora

Then you can execute either of them depending on your logic:

Document doc = Factory.newDocument("Text from my web form");
corpus.add(doc);
// if some condition
pipeline1.execute();
// remember to clean up resources:
corpus.clear();
Factory.deleteResource(doc);

However, if you're doing a web application I would recommend reading this whole chapter and using what's most convenient in your case. I personally prefer the Spring application, following the example in module 8 from the GATE training materials. If you're familiar with Spring it should be easy for you to configure two different pipelines to use in your services.
|
I want to start GATE from an external system managed by a UI. I'm not in charge of the UI development. I need to know if GATE can be started/initialized externally with TWO PIPELINES. Can this be done? And if so, how? I suppose I should use the "Gate.init();" command to initialize/start GATE, but then how do I start two separate pipelines? Thanks in advance.
|
Starting GATE with two pipelines
|
If you pass the output of the <COMMAND> to a loop, you could evaluate it one line at a time:

<COMMAND> | while read text; do
ipaddr=`echo $text | grep -oE '((1?[0-9][0-9]?|2[0-4][0-9]|25[0-5])\.){3}(1?[0-9][0-9]?|2[0-4][0-9]|25[0-5])'`
if [ $? -eq 0 ]; then
(echo "exit" | nc $ipaddr 23 -w 5
if [ $? -eq 0 ]; then
(
<SomeCommandsHERE>
) | nc $ipaddr 23 1>>$file 2>&1
fi
)
fi
done
|
I want to know how I can use stdout from a piped command and then use it in an nc connection:

<COMMAND> | \
grep -oE '((1?[0-9][0-9]?|2[0-4][0-9]|25[0-5])\.){3}(1?[0-9][0-9]?|2[0-4][0-9]|25[0-5])' | \
(echo "exit" | nc <IP-HERE> 23 -w 5 \
if [ "$?" -eq "0" ]; then
(
<SomeCommandsHERE>
) | nc <IP-HERE> 23 1>>$file 2>&1 )

Questions:
1) How can I use the result of the grep command for my nc command in this thread?
2) Can the result of grep, which is an IP, be used only within the whole following statement, like what I did here?

(echo "exit" | nc <IP-HERE> 23 -w 5 \
if [ "$?" -eq "0" ]; then
(
<SomeCommandsHERE>
) | nc <IP-HERE> 23 1>>$file 2>&1 )

UPDATE - What I tried so far:

<COMMAND> | \
grep -oE '((1?[0-9][0-9]?|2[0-4][0-9]|25[0-5])\.){3}(1?[0-9][0-9]?|2[0-4][0-9]|25[0-5])' | \
while read ip; do
( echo "exit" | nc "$ip" 23 -w 5
[[ "$?" -eq "0" ]]
(
echo "hello"
) | nc "$ip" 23 1>>$file 2>&1
); done

Is this correct? How can I change the if statement if it is not correct?
|
Using stdout from piped command in nc connection and commands in parentheses
|
SW R16, -100(R6) --> possible RAW on R6 and/or R16
LW R4, 8(R16) --> none: R16 was read in the previous instruction,
so it can be read safely here
ADD R5, R4, R4 --> RAW on R4
|
I need to determine the dependency types present in the following block of instructions. Unfortunately, the book I'm using is extremely unclear as to how to go about this. This is what I came up with:

SW R16, -100(R6) --> RAW on R16
LW R4, 8(R16) --> WAR on R16
ADD R5, R4, R4 --> RAW on R4

Am I on the right track? Can the first instruction have a read-after-write dependency even though it is the first instruction in the pipe?
|
Dependency Types - MIPS/Pipelining
|
Lose the space between "image" and "freeze" and you should be good.
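Applying that to the pipeline from the question would give, for example:

gst-launch filesrc location=foto.jpg ! jpegdec ! imagefreeze ! mfw_isink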
|
I have started working on gstreamer. I get a warning from the terminal command below; can you help me figure it out? P.S. I had installed gstreamer successfully.

oddspin@oddspinl1:~$ gst-launch filesrc location=foto.jpg ! jpegdec ! image freeze ! mfw_isink
WARNING: erroneous pipeline: no element "image"
|
WARNING: erroneous pipeline: no element "image"
|
I can't see that your code works properly, even though you don't say how it's supposed to work (why is $check taken from args[1] instead of args[0]?). Your Select-String line is getting the matching lines, then doing some selecting which throws away the line data you want and doesn't seem to be necessary. I've reworked it as:

$check = $args[0]
$totalMatches = 0
foreach ( $file in $args[1..$args.Length] )
{
if ( Test-Path $file ) {
$matches = Select-String $check $file -AllMatches -SimpleMatch
Write-Output "There were $($matches.Count) Matches in $file" | Tee-Object -FilePath "output.txt" -Append
foreach ($match in $matches) {
Write-Output $match.Line | Tee-Object -FilePath "output.txt" -Append
}
Write-Host
$totalMatches = $totalMatches + $matches.Count
}
else {
Write-Output "File $file does not exist" | Tee-Object -FilePath "output.txt" -Append
}
}
echo "Total Matches Found: $totalMatches"

Changes:
- Take $check as the first argument.
- Iterate over the arguments directly instead of counting through them.
- Added -SimpleMatch so it doesn't work with regexes, since you didn't mention them.
- Removed the Select-Object -Expand bits; just grab the Select-String results.
- Loop through the results and get the line from $match.Line.
- Added Tee-Object, which both writes to the screen and to the file in one line.
|
$check = $args[1]
$numArgs = $($args.count)
$totMatch = 0
#reset variables for counting
for ( $i = 2; $i -lt $numArgs; $i++ )
{
$file = $args[$i]
if ( Test-Path $file ) {
#echo "The input file was named $file"
$match = @(Select-String $check $file -AllMatches | Select -Expand Matches | Select -Expand Value).count
echo "There were $match Matches in $file"
echo "There were $match Matches in $file" >> Output.txt
$totMatch = $totMatch + $match
}
else {
echo "File $file does not exist"
echo "File $file does not exist" >> Output.txt
}
}
echo "Total Matches Found: $totMatch"

Essentially I created a quick app to find the searched word and count the instances in the file. Would anyone know how to edit this to also send the whole line that the word was found in to the Output.txt file, so that on top of the instance counts it adds the whole line itself? Thanks in advance.
|
Printing the whole line Powershell on word finder
|
You can't access $_ in a string like that, but you can inside a scriptblock if the scriptblock is used as the argument for a parameter that accepts pipeline input, e.g.:

for ($i = 0; $i -le 3; $i++) {
Get-ChildItem $somepath | Copy-Item -Destination {"c:\somefolder\$i-$($_.Name)"}
}
|
I use:

for ($i = 0; $i -le 3; $i++) {Get-ChildItem -path $somepath | Copy-Item -Destination "c:\somefolder\$i-${$_.Name}"}

But the destination path never gets the variables translated into the right letters. Here is what I mean, assuming $i = 2 and $_.Name = file.exe:

"c:\somefolder\$i-${$.Name}" goes to c:\somefolder\2-
"c:\somefolder\$i-$($.Name)" goes to c:\somefolder\2-
"c:\somefolder\${$.Name}$i" goes to c:\somefolder\2-
"c:\somefolder\${$.Name}" goes to c:\somefolder\2-

but

"c:\somefolder\${$_.Name}" goes to c:\somefolder\file.exe

What am I doing wrong? How can I combine the two variables together?
|
Powershell pipe variable inside string
|
If I understand correctly, you can do the following. In the application layout:

yield :stylesheets if content_for? :stylesheets

In the specific view:

content_for :stylesheets do
  = stylesheet_link_tag "specific_style"

Make sure to precompile "specific_style" in application.rb:

config.assets.precompile << ['application.css', 'specific_style.css']
|
What I mean when I ask this is: "Is there a way that an individual view can load with specific stylesheets and resources?" Now, I'm not referring to the classic use of script and link HTML tags to reference resources, primarily because when Rails loads, it precompiles ALL of the available assets. The main reason I want to do this is that after a while I find there are so many good web development frameworks out there that it can be very fun to mix and match various ones, and it's really hard to target one framework to use all the time. Also, there can be a lot of conflict between various frameworks because of similar naming systems (e.g. Bootstrap 2 alongside Bootstrap 3).
|
Is there a way to load specific assets in Rails for individual views?
|
Yes, collect the data in the datastore and send once a day. A typical model might be (in Python):

class DigestEmail(db.Model):
recipient = db.StringProperty()
pdf_id = db.StringProperty()
sent = db.BooleanProperty(default=False)

Then when you need to send an email from your task queue, create a DigestEmail entity instead. Then, once a day (or whatever), query your DigestEmail entities where sent = False, ordered by recipient, like this:

query = DigestEmail.gql('WHERE sent = False ORDER BY recipient')

Then iterate through your query results, group them by recipient, send the email, and set the sent property to True to prevent it being sent again. (Alternatively, delete the entities altogether.)
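A rough sketch of that daily job (send_digest is a hypothetical placeholder for whatever mail-sending code you already have):

import itertools

def send_daily_digests():
    query = DigestEmail.gql('WHERE sent = False ORDER BY recipient')
    # The query is ordered by recipient, so consecutive entities can be grouped
    for recipient, group in itertools.groupby(query, key=lambda e: e.recipient):
        entities = list(group)
        send_digest(recipient, [e.pdf_id for e in entities])  # one email per user
        for e in entities:
            e.sent = True
            e.put()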
|
I have a GAE application. I store emails in the NoSQL datastore along with their Google Drive refresh tokens. I have a cron job which fires a push-queue task for each PDF in order to download it. Now I want to send each user an email about their PDF data. I can't send an email per document (for example, if user [email protected] has 10 PDF documents, I would end up sending 10 emails to that address, because each PDF task sends its own mail). But how do I collect a user's data and send it together? Each task works on one PDF; I should collect each user's data together and send one email about all of their documents (in my example, one email containing the data for all 10 PDFs). One idea I have is to save that data in the Datastore, and the next day another cron job will collect the data from the DB and send the mails. Is this a good way?
|
GAE collection task data support
|
I solved it. I added tee tv1 for images and tv2 for video.

---- queue leaky=1 ! tee name=tv1
/
tee ----
\
---- queue leaky=1 ! tee name=tv2
gst-launch-1.0 rtspsrc latency=2000 location=rtsp://192.168.1.16/live2.sdp name=src ! queue ! rtpmp4vdepay ! decodebin ! videorate ! video/x-raw,framerate=15/1,format=I420 ! videoconvert ! tee name=tv tv. ! queue leaky=1 ! videoparse width=640 height=480 framerate=15/1 ! tee name=tv1 tv. ! queue leaky=1 ! videoparse width=640 height=480 framerate=15/1 ! tee name=tv2
tv1. ! queue ! videoparse width=640 height=480 framerate=15/1 ! videoscale ! video/x-raw,width=320,height=240 ! videorate ! video/x-raw,framerate=1/60,format=I420 ! jpegenc quality=20 ! multifilesink location=/tmp/%06d-low.jpg
tv1. ! queue ! videoparse width=640 height=480 framerate=15/1 ! videorate ! video/x-raw,framerate=1/60,format=I420 ! jpegenc quality=60 ! multifilesink location=/tmp/%06d-mid.jpg
tv2. ! queue ! videoparse width=640 height=480 framerate=15/1 ! videoscale ! video/x-raw,width=320,height=240 ! x264enc bframes=0 bitrate=240 speed-preset=superfast ! mpegtsmux ! multifilesink location=/tmp/%06d-low.ts next-file=2
tv2. ! queue ! videoparse width=640 height=480 framerate=15/1 ! x264enc bframes=0 key-int-max=15 bitrate=460 speed-preset=superfast ! mpegtsmux ! multifilesink location=/tmp/%06d-mid.ts next-file=2
|
I have a pipeline. It takes an RTSP stream from a camera and saves HLS segments plus frames every minute:

gst-launch-1.0 rtspsrc latency=2000 location=rtsp://192.168.1.16/live2.sdp name=src ! queue ! rtpmp4vdepay ! decodebin ! videorate ! video/x-raw,framerate=15/1,format=I420 ! videoconvert ! tee name=tv
tv. ! queue ! videoparse width=640 height=480 framerate=15/1 ! videoscale ! video/x-raw,width=320,height=240 ! videorate ! video/x-raw,framerate=1/60,format=I420 ! jpegenc quality=20 ! multifilesink location=/tmp/%06d-low.jpg
tv. ! queue ! videoparse width=640 height=480 framerate=15/1 ! videorate ! video/x-raw,framerate=1/60,format=I420 ! jpegenc quality=60 ! multifilesink location=/tmp/%06d-mid.jpg
tv. ! queue ! videoparse width=640 height=480 framerate=15/1 ! videoscale ! video/x-raw,width=320,height=240 ! x264enc bframes=0 bitrate=240 speed-preset=superfast ! mpegtsmux ! multifilesink location=/tmp/%06d-low.ts next-file=2
tv. ! queue ! videoparse width=640 height=480 framerate=15/1 ! x264enc bframes=0 key-int-max=15 bitrate=460 speed-preset=superfast ! mpegtsmux ! multifilesink location=/tmp/%06d-mid.ts next-file=2

It works. But if I try to change the x264enc speed-preset to anything better than superfast, the pipeline does not work (no errors, but no files appear). It starts to work if I delete the JPG parts and leave only the TS. Maybe I'm doing something wrong? How do I make the video quality better?
|
Gstreamer 1.0 strange pipeline behavior
|
I assume that this is fixed in newer versions of gstreamer. In my 1.2.3 build I cannot reproduce this, at least.
|
I have a working gst-launch pipeline in 0.10:

gst-launch-0.10 \
filesrc location=c:/prog4.mpg \
! tsdemux name=dem \
! queue \
! ac3parse \
! a52dec \
! audioconvert \
! audioresample \
! autoaudiosink \
dem. \
! queue \
! mpegvideoparse \
! mpeg2dec \
! autovideosink

But the same pipeline in version 1.0 spews the error:

Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Redistribute latency...
ERROR: from element /GstPipeline:pipeline0/GstTSDemux:dem: Internal data stream error.
Additional debug info:
mpegtsbase.c(1639): mpegts_base_loop (): /GstPipeline:pipeline0/GstTSDemux:dem:
stream stopped, reason not-negotiated
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...

I also tried using playbin, which, strangely enough, gives the same error. How do I fix this?

EDIT: Okay, so I've figured out that it's the video part that's causing trouble. If I isolate the audio and video parts, the audio works fine! It's this bit that's causing trouble:

gst-launch-1.0 filesrc location=/home/rubndsouza/prog4.mpg \
    ! tsdemux ! queue ! mpegvideoparse ! mpeg2dec ! autovideosink

Any help would be appreciated. Thanks!
|
Gstreamer pipeline to play mpegts file works in version 0.10 but not 1.0
|
Apparently, the problem is that the password being passed through the pipeline includes a space: the one that appears between our hypothetical "d" and the pipe symbol itself. :) So, for future reference, this works:

echo password|gpg --batch --yes --passphrase-fd 0 "filename"

Which, by the way, is exactly what the guide had said, but which I never caught onto because I did my initial testing in PowerShell and didn't realize how picky cmd's echo command could be.
|
The larger context of this problem is that it isn't possible, for whatever reason, to decrypt this file using, say, Bouncy Castle, so we're trying to do an automated command line with the normal gpg utility instead... I originally thought that would be quicker than trying to figure out why Bouncy Castle doesn't believe this is a real PGP-encrypted file, but I might have been wrong. Here's the pipeline:

echo password | gpg --batch --yes --passphrase-fd 0 "filename"

This works perfectly in PowerShell. Actually, several variations on this work perfectly in PowerShell, but that's not the point... The point is that I'm trying to run this in cmd.exe and it doesn't work there. Instead, I get an error saying that no password has been provided and that, therefore, there is no secret key available and the file cannot be decrypted. Given that the instructions I read for this are specifically for cmd.exe (not PowerShell), I'm more than a little confused. Any idea what's going on here?
|
Pipe works in Powershell but not CMD?
|
grep '.*|.*|.*' will select lines with at least three fields and two separators.
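If you need lines with exactly three fields, a sketch using an anchored extended regex (assuming the fields themselves never contain a |), feeding straight into the rest of the question's pipeline:

grep -E '^[^|]*\|[^|]*\|[^|]*$' file | sort -nbsk1 | cut -d "|" -f1 | uniq -d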
|
I have a file that includes the following lines: 2 | blah | blah1 | blah | blah3 | blah2 | blah | blah11 | high | five3 | five

I want to extract only the lines that have 3 columns (3 fields, 2 separators...). I then want to pipe them to the following commands:

| sort -nbsk1 | cut -d "|" -f1 | uniq -d

so that in the end I only get: 21

Any suggestions? It's part of a homework assignment; we are not allowed to use awk/sed and some other commands (grep/tr and what's written above can be used). Thanks
|
bash - extracting lines that contain only 3 columns
|
You probably have the folder app/assets/javascripts; if you put the file there, it will be loaded in your application.js file. With the basic configuration, all JS files in assets are merged into one optimized file which you include in your views. If you want to understand it better I recommend http://guides.rubyonrails.org/asset_pipeline.html. The same goes for images (put them into app/assets/images/) and styles (app/assets/stylesheets); if you use folder names it should not become a big mess. On the other hand, if you want to keep them all in one place, you can still copy the whole folder into public and you will have access to those files. For example, if you put it into public/fancybox/... your path will be localhost:3000/fancybox/fancybox.js and you can load it on every page that needs it.
|
I need to add Fancybox to my Rails app. Normally I would just use the gem and add the required lines to application.js and application.css. However, I am using a bunch of different templates within my app, and I have to link in the stylesheets and JS files manually (using <%= javascript_include_tag ... %>, for example) because some templates use some of them and others don't. On the gem's instructions page, it says I can add the assets into the lib/assets directory manually. I've never done this before. Do I need to manually copy the images into a lib/assets/images folder, and the JS files into the lib/assets/javascripts folder, etc., or is there a way to put them all into one single "fancybox" folder so that they can all stay organized? I will be adding more things like this to the app, and I don't want the images, javascripts and stylesheets folders to just become a big mishmash of different files from different plugins. I hope this makes sense, and thanks.
|
How to manually add a plugin like Fancybox to your Rails app
|
The interesting bits of code like this tend to happen in something else. Some known purposes of such code are:
- Runtime state retrieval. On x86, for example, __asm__("call 0f\n0: pop %0\n" : "=r"(pc)) is a way to retrieve the program counter (the IP register is hidden and not directly accessible, so the fact that call pushes it to the stack is used to retrieve it). Beware that this isn't safe to use in leaf functions in 64-bit mode due to the red zone - see "Inline assembly that clobbers the red zone". The correct way to do it on x86_64 is asm("lea 0f(%%rip), %0\n0:\n" : "=r"(pc)), which exploits the fact that PC-relative addressing is possible in 64-bit mode.
- Instrumentation (debugging / runtime tracing), e.g. by putting tracing code / NOP slots in there that tracing utilities can modify at runtime to dynamically hook into the code. Solaris DTrace uses such techniques.
- On ARM (and 64-bit x86), the method is also used to embed constants within the code, for use with PC-relative loads.
Whether unconditional branches like this cause branch-prediction miss penalties or other types of stalls is very CPU-dependent.
|
UPDATED: Changed the 2nd line of assembly to the mnemonic actually being used (mflr) and added more info at the bottom.

I ran across some code (using gcc) resembling the following (paraphrased):

#define SOME_MACRO( someVar ) \
do { \
__asm__ ( \
" b 0f\n" \
"0: mflr %0\n" \
: "=r"( someVar ) \
); \
} while(0)

... where the b instruction (PPC) is a short jump and mflr gets the contents of the 'link register' -- which is similar to the program counter in some respects. I've seen this sort of thing for Intel code as well (cf. the accepted answer in this question). The branch acts as a no-op... my question: what purpose does this serve? I'm guessing it has something to do with branch prediction, but so far I've only found people's code using this idiom while searching.

It looks like I was wrong on the branch prediction guess. mflr grabs the contents of the link register. So, my question boils down to: why is the branch necessary?
|
Inline assembly with "jmp 0f" or "b 0f" at the beginning
|
IIS 8 comes with .NET Framework 4.5, so you may be running into a missing fix: http://support.microsoft.com/kb/2828842

Issue 6 - Symptoms: When you send many concurrent requests that have the same SessionId to an ASP.NET 4.5 web application, some requests may freeze at the RequestAcquireState stage unexpectedly.
Resolution: After you apply the hotfix, the hotfix makes sure that the EndRequest event will always trigger.

Try installing this fix and see if that addresses the issue.
|
We are trying to make a cutover from IIS 6.0 to an IIS 8.0 integrated application pool on Windows Server 2012 Standard edition, for an application built on ASP.NET 4.0. Our web application requests get stuck in RequestAcquireState (the ASP.NET session gets locked for concurrent requests working with the same session id) in the IIS 8.0 integrated pool on the above Windows Server. However, this behavior does not show up when we run the same app in Classic mode on IIS 8.0. Session state is stored InProc. We can rectify this situation on Windows Server 2012 Datacenter by modifying SessionStateLockedItemPollInterval in the registry, but that solution does not work on Windows Server 2012 Standard edition. This has left us perplexed: why does an ASP.NET runtime session issue surface in the IIS 8.0 integrated pool for an application we have run successfully on previous versions of IIS and in Classic mode on IIS 8.0? And how do we rectify this problem on Windows Server 2012 Standard edition? Thanks, will appreciate if somebody can help.
|
IIS 8.0 integrated pipeline Session RequestAcquireState
|
Not all Java libraries are compatible with Android. Do what RenegadeAndy wrote here, and if that doesn't work, the lib might really be incompatible with Android.
Remember, Android is not a real Java implementation.
|
I'm having this problem, and I tried googling and doing tutorials in Eclipse, but it wasn't really helpful for getting this .jar file used in my project. Source: https://github.com/brunodecarvalho/hotpotato. To be specific, I downloaded the jar file and added it to my project by copying it into my source folder, then added it via Properties -> Add JARs. I also did a Project -> Clean. Then, once I start coding as shown in the examples, it gives me errors, which suggests that adding the jar file was not successful. I tried in different Eclipse workspaces but still no luck. To explain what I'm trying to do here: I'm working on an Android download manager project. What I need to do is create a pipelined connection to the request URL so that I can download the packets in parallel. Please help me find my mistake. Thanks for your time!
|
Android - Adding JAR File to eclipse (hotpotato API)
|
Thanks for your answer Jay but as so often happens I walk away from the problem & realise the simple answer to the problem. Unfortunately I cannot test it out until I get back to work tomorrow but the scenario is one complex xml string in seven xml documents out BUT they're of different types and I set the receive shape to take messages of the 7 out. I think what I need to do is set the message type to XmlDocument instead and then cast afterwards in the orchestration.
|
I'm not entirely sure if this exception is anything to do with the custom pipeline component I have created or not. I have loaded the code in VS2010 and attached to the BTSNTSVC.exe but before I even hit the first break point I get this error:There is no disassembly to view and the code (for my component) works fine in a console application with the same input file.This pipeline component is on a receive port. any ideas? Thanks
|
BizTalk Custom Pipeline Component: XmlException crossed a native/managed boundary
|
This can be easily implemented using PHP. The HttpRequestPool can be used to build a custom client doing just that. Also see How can I make use of HTTP 1.1 persistent connections and pipelining from PHP?

With Go it's also fairly easy: if you create the connection yourself, you just have to send all the requests and then you can read the responses sequentially, and it will send them all through one pipelined HTTP connection.

// assumes: import ( "net" "net/http" "net/http/httputil" )
conn, _ := net.Dial("tcp", "127.0.0.1:80")
client := httputil.NewClientConn(conn, nil)
req, _ := http.NewRequest("GET", "/", nil)
client.Write(req)
resp, _ := client.Read(req)

You should do some more error checking though.
|
I am looking for a client software that will run on a Unix system to do long-polling with multiple requests in a single HTTP pipeline.

Basically we need to issue several long-polling GET requests to a server. All the requests need to be done within a single HTTP pipeline. The client needs to have N requests open at any given time, where N > 1. The server will respond either with a 200 OK or 204 No Content. In case of a 200 OK, the response needs to be piped into a new process.
|
Client to do long-polling with multiple requests in a http pipeline.
|
Because the value read from memory to be loaded into R1 hasn't yet been written to the register file. If you were to read the value from R1, you would get the value R1 contained before the R1<-M1 instruction. The new R1 value is stored in the ME->WB pipeline register after ME1.
|
Hi. Suppose the instructions below:

R1<-M1
R2<-M2
R3<-R1*R2
M3<-R3

Now we will create a pipeline like the one below, without bypassing: [XXX : bubble]

IF1 ID1 EX1 ME1 WB1
IF2 ID2 EX2 ME2 WB2
IF3 XXX XXX XXX ID3 EX3 WB3
XXX XXX XXX XXX IF4 ID4 EX4 WB4

And we will create a pipeline with bypassing like the one below: [XXX : bubble]

IF1 ID1 EX1 ME1 WB1
IF2 ID2 EX2 ME2 WB2
IF3 XXX ID3 EX3 WB3
XXX XXX IF4 ID4 EX4 WB4

We should wait until WB1 and WB2 are done, then we can execute instruction 3. So in the bypassing method we will store the R1 and R2 values after the EX1 and EX2 stages into a buffer. But... in the bypassing approach, after EX1, how can we get the register R1 value? We haven't reached WB1 yet. Why do we need a buffer - why not read R1 directly?
|
By Pass Method for Instruction Pipeline
|
We are not allowed to change the pipeline directory structure of the following segments:

- AddInViews
- AddInSideAdapters
- Contracts
- HostSideAdapters

You should read the pipeline directory guidelines.

Note that nothing stops you from naming the assemblies with your own names. You can have hv.dll instead of HostView.dll.

Regards,
|
I have built a pipeline for addins using C#. Once I build the projects, how can I update the code so that it will use the .dll files in the root directory and not in the typical add-in sub-directories?

Example - currently:

\addins\AddIns.store; \addins\<all the addin that i have built in sub-directories>
\addinsideadapters\AddInSideAdapters.dll
\addinviews\AddInView.dll
\contracts\MyClass.Contracts.dll
\hostsideadapters\HostSideAdapters.dll
\hostview.dll
\application.exe
\pipelinesegments.store

Ideally (respectively):

\ai.store; \addins\<all the addin that i have built in sub-directories>
\aisa.dll
\ain.dll
\myclass-c.dll
\hsa.dll
\hv.dll
\application.exe
\ps.store

Code:

_addins = new List<MyClassBase>();
String path = Environment.CurrentDirectory;
AddInStore.Rebuild(path);

At this point the AddInStore object has been built, and when I breakpoint here the AddInStore object already has the directories set:

AddInAdaptersDirName = "AddInSideAdapters"
AddInBasesDirName = "AddInViews"
AddInCacheFileName = "AddIns.store"
AddInsDirName = "AddIns"
ContractsDirName = "Contracts"
HostAdaptersDirName = "HostSideAdapters"
PipelineCacheFileName = "PipelineSegments.store"Is it possible to manually set this object to achieve my ideal directory and file structure for the add-ins?
|
Custom Pipeline File and Directory Structure in C#
|
How practical is using the MEF pipeline in your application for an add-in/plug-in environment?

I use MEF all the way for my loosely coupled applications.

What is it? The Managed Extensibility Framework (MEF) is a composition layer for .NET that improves the flexibility, maintainability and testability of large applications. MEF can be used for third-party plugin extensibility, or it can bring the benefits of a loosely-coupled plugin-like architecture to regular applications.

BTW, PRISM can be used with MEF, but there are a lot of other examples on the www.
|
How practical is using the MEF pipeline in your application for a add-in/plug-in environment?If, for example I want to create a basic reporting base class, then extend the functionality using some kind of add-in setup (like the MEF pipeline), how practical is it to use it in this setup?I haven't many applications using this model (if someone has a list of commercial software using this I'd be interested to check it out)
|
Practicality of Add-In Pipeline in C#
|
OK, I figured it out: PCWrite and IF/DWrite are used for stalling - set them to 0 to stall.
|
From the Patterson/Hennessy book: what are PCWrite & IF/DWrite (the 2 left-most control signals coming from the hazard detection unit)?
|
What does PCWrite & IFWrite in MIPS Pipeline do/refer to?
|
Pipelines aren't supported in WebForms. (xref: Deploying Applications and Components to .NET)

Good luck,

Terry.
|
I'm trying code similar to below to create a data pipeline to migrate data from a database to another.
The pipeline works fine with the desktop application, but when I migrate the application to the .net web forms application to use on the internet, the pipeline does not work. It returns the error code "-1" (while on desktop, it returns 1).
Can someone tell me what is problem, why it does not work on the internet? I am using Powerbuilder classic 12, with Sybase Anywhere 12 using ODBC on Windows XP/IIS 5.1.Transaction trans_source, trans_dest
trans_source=CREATE Transaction
trans_dest=CREATE Transaction
trans_source.DBMS = "ODBC"
trans_source.DBPARM = "ConnectString='DSN=db1;UID=dba;PWD=sql'"
trans_dest.DBMS = "ODBC"
trans_dest.DBPARM = "ConnectString='DSN=db2;UID=dba;PWD=sql'"
connect using trans_source;
connect using trans_dest;
lp_Create=CREATE p_pipe
lp_Create.DataObject="p_create_tableA"
result_value = lp_Create.Start(trans_source,trans_dest,dw_errors)
messagebox("result", result_value)
|
PowerBuilder Pipeline does not work in web application?
|
You have to intercept the request in an event that fires after the handler and the session state have been assigned.
Use the PostAcquireRequestState event in your particular scenario:

context.PostAcquireRequestState += OnPostAcquireRequestState;
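For reference, a minimal sketch of a module wired up this way ("MyValue" is a placeholder for whatever key your page uses in the context items):

public class MyModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        context.PostAcquireRequestState += OnPostAcquireRequestState;
    }

    private void OnPostAcquireRequestState(object sender, EventArgs e)
    {
        var app = (HttpApplication)sender;
        // read the value that was placed into the context items
        object value = app.Context.Items["MyValue"];
        // ... use the value
    }

    public void Dispose() { }
}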
|
I have a http module in my asp.net c# web application.I insert a value into context items on the OnPreInit of my Page.I want to read this value from the context items in my httpmodule. However I can't find which event in my httpmodule this can be read at. The latest I have tried is the PreRequestHandlerExecute event.Could someone point me in the correct direction what event in the httpmodule I can read this value from the context items?
|
read from context items in httpmodule
|
FetchData() seems to be an asynchronous operation. So, before the actual operation is complete, the function returns and hides the loading bar. I suggest you use an AsyncTask. To show a progress dialog while an AsyncTask runs, you may call show() in onPreExecute() and call hide() in onPostExecute(). Call FetchData() from doInBackground(). This will start the ProgressDialog before the AsyncTask does its background work and will stop the ProgressDialog when it completes.
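A minimal sketch of that pattern (the enclosing activity name is a placeholder; the UiManager calls are taken from the question):

private class FetchDataTask extends AsyncTask<Void, Void, Void> {
    @Override
    protected void onPreExecute() {
        // runs on the UI thread before the background work starts
        UiManager.getInstance().showLoadingBar(MyActivity.this);
    }

    @Override
    protected Void doInBackground(Void... params) {
        // runs off the UI thread
        FetchData();
        return null;
    }

    @Override
    protected void onPostExecute(Void result) {
        // back on the UI thread once FetchData() has finished
        UiManager.getInstance().hideLoadingBar();
    }
}

// start it with:
new FetchDataTask().execute();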
|
When I define progress dialog functions such as

public static void showLoadingBar(Context context)
{
dialog=new ProgressDialog(context);
dialog.setMessage("Please wait");
dialog.show();
}
public static void hideLoadingBar()
{
dialog.dismiss();
}

I wanna use it like the following:

UiManager.getInstance().showLoadingBar(this);
FetchData();
UiManager.getInstance().hideLoadingBar();

But I have never been able to show the loading bar unless I comment out the UiManager.getInstance().hideLoadingBar(); line, like this:

UiManager.getInstance().showLoadingBar(this);
FetchData();
//UiManager.getInstance().hideLoadingBar();

What this causes is the ProgressBar always staying on the screen. Is there any way to get rid of this issue?
|
In Android, usege of ProgressDialog.show() and ProgressDialog.hide() problem during fetching data from the internet
|
You claim that the first pipeline stopped working but you don't explain what happened. Things stop working because something else changed:
- version of GStreamer and submodules ?
- version of OS ?
- version of camera ?

It shouldn't be necessary to add a bunch of queues in a row. In practice they will create thread boundaries and separate the part before and after across threads, and it will add the delay you see, which will affect the latency and sync.
|
I have a C program that records video and audio from a v4l2 source into flv format. I noticed that the program did not work on newer versions of Ubuntu. I decided to try to run the problematic pipeline in gst-launch and find the simplest pipeline that would reproduce the problem. Just focusing on the video side, I have reduced it to what you see below.

So I have a gstreamer pipeline that was working:

gst-launch v4l2src ! tee name="vtee" ! queue ! videorate ! ffmpegcolorspace ! ffdeinterlace ! x264enc ! flvmux name="mux" ! filesink location=vid.flv vtee. ! queue ! xvimagesink

Now it will only work if I add a bunch of queues one after another before the xvimagesink. Although this does work, I get a 2 second delay before the pipeline starts to work and I also get the message shown further below:

gst-launch v4l2src ! tee name="vtee" ! queue ! videorate ! ffmpegcolorspace ! ffdeinterlace ! x264enc ! flvmux name="mux" ! filesink location=vid.flv vtee. ! queue ! queue ! queue ! queue ! queue ! xvimagesink

Although the second pipeline above works, there is a pause before the pipeline starts running and I get the message (I don't think this system is too slow, it's a core i7 with tons of ram):

Additional debug info:
gstbasesink.c(2692): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0:
There may be a timestamping problem, or this computer is too slow.

Can anyone explain what is happening here? What am I doing wrong?
|
gstreamer pipeline that was working now requiring a bunch of queue components, why?
|
If you are using Servlet 3.0 you can. What you do is implement either a ServletContextListener or a ServletContainerInitializer. The code below shows it with ServletContextListener:

@WebListener("auto config listeners")
public class MyListenerConfigurator implements ServletContextListener {
    public void contextInitialized(ServletContextEvent scEvt) {
        ServletContext ctx = scEvt.getServletContext();
        FilterRegistration.Dynamic reg = ctx.addFilter("myFilter", "my.listener.class.MyListener");
        ...
    }

See the EE6 docs here. Perhaps the only drawback is that you can add but you cannot remove, and you can only add when the application starts up.

Note: code not tested
|
Can I write a module/filter that gets put into the processing pipeline in Tomcat BEFORE the web application even gets run? Something that I could turn on/off for each web application that Tomcat is handling. Is this possible?

So basically it would be a re-usable filter that hooks into the web pipeline and could alter the request's behavior or perform/modify the requests. One example would be to log all IPs, redirect based on the URL, block the request, etc.
|
Can I write a module/filter that gets fired before the web app get's run in Tomcat?
|
I'm not familiar with the LZMA SDK, but in C# with the SharpZipLib library it's easy to stream out zip files. You don't have to worry about memory; only the blocks being compressed/streamed will be in memory at any one time. We use this to compress and stream files via HTTP, but the concept for FTP is the same.

Basically you create a ZipOutputStream that passes data off to the FTP stream. Call PutNextEntry at the start of each file and then stream the file contents. Not much more to it than that.

http://www.icsharpcode.net/OpenSource/SharpZipLib/
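A rough sketch of the idea (the server address, credentials and filesToSend are placeholders, and error handling is omitted):

var request = (FtpWebRequest)WebRequest.Create("ftp://server/archive.zip");
request.Method = WebRequestMethods.Ftp.UploadFile;
request.Credentials = new NetworkCredential("user", "password");

using (Stream ftpStream = request.GetRequestStream())
using (var zipStream = new ZipOutputStream(ftpStream))
{
    foreach (string file in filesToSend)
    {
        zipStream.PutNextEntry(new ZipEntry(Path.GetFileName(file)));
        using (FileStream fs = File.OpenRead(file))
            fs.CopyTo(zipStream);   // compressed bytes flow out to the FTP stream as they are produced
        zipStream.CloseEntry();
    }
    zipStream.Finish();
}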
|
I am writing a utility that will zip a file (or set of files) using the LZMA sdk then send the file off to a ftp server. Usually the speed of the compression is faster than the speed of the ftp connection. What I would like to do is instead of compressing the file, waiting for it to finish, then starting the upload I would like to compress to a temporary file or stream, then while it is being compressed upload the completed portions.The question now is how?One concern I have is the files I will be working with can be over 1GB when compressed and the systems I will be running this on will have between 512MB and 2GB of ram so I do not want to let the compression side run wild in to memory and lock up the system. The method I have been thinking about is running the compression in a thread, Queuing up 5-10Mb in a memory stream, then send the info to the ftp in the other thread. Is this a good approach or is there a better way to do it? Are there any gotchas like it needs to rewrite the file header at the start of the file when it is done or anything else?I plan on writing this in c# but code examples in c, c++, or java are fine too.Thank you for your help.
|
Pipeling compression and ftp transfer
|
What kind of encryption are you looking for? Are you looking for raw RSA encryption, or a specific message format?

Out of the box, BizTalk only supports S/MIME encryption using the SMIME encoder/decoder component; it might be useful depending on exactly what your format is.

As for how to create a custom pipeline component from scratch to do it, I recommend first starting with the Pipeline Component Wizard. It will take care of most of the boilerplate code. I do have a sample on writing custom encryption pipeline components, though my specific sample uses symmetric encryption and not RSA (but it should give you a clue as to how to implement this). The code for these components can be found here.
|
I need to use X509 certificate in the BizTalk Custom Pipeline component to Encrypt/Sign the message and to Decrypt/Verify signature, please let me know some good samples/artcile/blogs etc which explains how to acheive this.RSA needs to be the encrypton algoritham.Thanks.
|
Encryption/Decryption using X509 certificate in biztalk custome pipeline component
|
Finally I implemented the filter chain using dom4j and XPath.
I decided on this API because it is quite handy if you have to move a number of branches inside one document, and its built-in XPath support makes finding the wanted elements easy.

Thanks for your answers.
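In case it helps someone, a bare-bones sketch of that approach could look like this (the XPath expression and file names are just placeholders):

import java.io.File;
import java.io.FileWriter;
import java.util.List;
import org.dom4j.Document;
import org.dom4j.Node;
import org.dom4j.io.SAXReader;
import org.dom4j.io.XMLWriter;

public class XmlFilter {
    public static void main(String[] args) throws Exception {
        Document doc = new SAXReader().read(new File("input.xml"));
        // select the branches to drop via XPath and detach them from the document
        List<Node> obsolete = doc.selectNodes("//entry[@status='obsolete']");
        for (Node node : obsolete) {
            node.detach();
        }
        try (FileWriter out = new FileWriter("filtered.xml")) {
            new XMLWriter(out).write(doc);
        }
    }
}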
|
I'd like to filter a couple of nested xml elements, evaluating their attributes. For this purpose I'm searching for an efficient and lightweight Java API or framework.

The main requirements are:

- filtering of element bodies, based on some pattern or condition
- event based XML transformation

My first idea was Apache Jelly, but Jelly has an ugly side effect: it removes CDATA tags, and that's an unwanted behaviour.

Thanks in advance.
|
Perform xml transformation and filtering in java
|
You can do this in BizTalk by taking advantage of several of its features. The first feature is all about batching and debatching using envelope schemas. These are techniques used to split an XML document into many smaller documents (i.e. an XML doc that contains 500 purchase orders into 500 XML docs each containing one purchase order), and then likewise to assemble them again on the send. Here is an article on how to achieve this: http://msdn.microsoft.com/ja-jp/library/aa578216.aspx

Depending on what exactly you are doing, you can also use an XPath expression to debatch the message in an orchestration and handle each individual message in the orchestration. The orchestration can then reassemble the outgoing messages into a single instance.

Tell us a little more about what you are trying to do.
|
In Biztalk 2006 I have a custom pipeline that split a file into many files, before each file get mapped. In the send pipeline I use "Use Temporary file for writing". My question is: When splitting messages and use "Use Temporary file for writing", will every one of the splitted files be moved to the out folder from the temp folder at the same time or is each one of the splitted files moved to the out folder as soon as it's done, not waiting for the other files? The files are very smal so I haven't found out. I just want to know the standard behaviour so I don't have to use much time to create big files and watch the result. Thanks for help :)
|
Biztalk Splitting a file and the use of
|
To rename a folder in ADLS Gen2 using the REST API and a web activity, follow the procedure below.

First, add the Storage Blob Data Contributor role to your Synapse workspace managed identity as follows:

Step 1: Go to the IAM of the ADLS account and click on "Add role assignment" as shown below:
Step 2: Search for the Storage Blob Data Contributor role and select it, as shown below:
Step 3: Select the Managed identity, as shown below:

After successfully assigning the role, go to the Synapse workspace, create a pipeline with a web activity, and configure the web activity as follows.

Mention the URL in the format below:

https://<storageAccountName>.dfs.core.windows.net/<containerName>/<newNameOfFolder>

Select the PUT method and add @toLower('') dynamic content to the body. Select System Assigned Managed Identity as the authentication method with the https://storage.azure.com resource. Add x-ms-rename-source as a header with the value /<containerName>/<oldNameOfFolder>, as shown below:

Debug the pipeline; it will run successfully and rename the folder. You can check the images below:

Before renaming:
After renaming:
|
How can I use the Azure Data Lake Storage Gen2 REST API to rename a folder in an Azure Storage Account using the web activity in a Synapse pipeline?
The folders are in the same container, and I need to rename them by executing a pipeline.
|
Call Azure Data Lake Storage Gen2 REST APIs using the web activity in an Azure Synapse pipeline
|
rules: are only evaluated when the pipeline is created. When you are starting a manual job, specifying any variables will not affect whether the job runs or not, because the rules: were evaluated at pipeline creation time.

You want the job to run only if the value of VERSION is "TEST-INPUT", and to fail in all other cases. Since you want to evaluate this condition after pipeline creation time, you can't use rules:. Instead, you can just add this logic to the job itself:

myjob:
when: manual
before_script:
- |
if [[ "$VERSION" != "TEST-INPUT" ]]; then
exit 1
fi
script:
- echo "Version is ${VERSION}"
|
So I have the following job I created. I want the job to run only if the value of VERSION is "TEST-INPUT"; in all other cases the job should fail. How would I do this? So far, if I enter any value for VERSION in GitLab, the job still runs.

test-manual-job:
stage: test-stage
when: manual
variables:
VERSION: "${VERSION}"
rules:
- if: '$VERSION == "TEST-INPUT"'
allow_failure: true
script:
- echo "${VERSION} is the hard-coded input"
|
How to run Manual Gitlab job if variable value is correct?
|
As the variable SCHEMA_SUFFIX is a runtime variable, we couldn't build any compile-time conditions with it. So I moved the condition check into the env part of the bash script, made the conditional statement there, and set the result on a new key, like below:

- job: working_on_compute
timeoutInMinutes: 180
dependsOn: gettoken1
variables:
SCHEMA_SUFFIX: $[env.config.outputs['checkconfigtask.startdate'] ]
DB_SCHEMA: $[env.config.outputs['checkconfigtask.startdate'] ]
|
- job: working_on_compute
timeoutInMinutes: 180
dependsOn: gettoken1
variables:
SCHEMA_SUFFIX: $[env.config.outputs['checkconfigtask.startdate'] ]
${{ if eq(length(variables['SCHEMA_SUFFIX']), 0) }}:
DB_SCHEMA: ""
${{ elseif ne(length(variables['SCHEMA_SUFFIX']), 0) }}:
DB_SCHEMA: "_$(SCHEMA_SUFFIX)"HereSCHEMA_SUFFIXis a runtime variable. I need to make a dynamic calculation and find theDB_SCHEMAvariable . How to solve this issue
|
How to create a new variable from a runtime variable in azure devops pipeline
|
Your Azure credentials have not been set up or have expired, please run Connect-AzAccount to set up your Azure credentials. No certificate thumbprint or secret provided for the given service principal '***'.

Based on the error message, Connect-AzAccount needs to be run before the Get-AzSqlDatabase command. To solve this issue, I suggest switching to the Azure PowerShell task, which automatically runs the Connect-AzAccount command.

Here is an example:

steps:
- task: AzurePowerShell@5
displayName: 'Azure PowerShell script: InlineScript'
inputs:
azureSubscription: 'xx'
ScriptType: InlineScript
Inline: |
$resourceGroupName = "xxx"
$serverName = "xxx"
$databases = Get-AzSqlDatabase -ResourceGroupName $resourceGroupName -ServerName $serverName
echo $databases
azurePowerShellVersion: LatestVersion

Result:
|
When using the Azure CLI task within an Azure DevOps pipeline to run a powershell script as per the below:$resourceGroupName = "rgname"
$serverName = "sqldbname"
$databases = Get-AzSqlDatabase -ResourceGroupName $resourceGroupName -ServerName $serverName
foreach ($database in $databases) {
Write-Host "Deleting database $($database.DatabaseName)..."
Remove-AzSqlDatabase -ResourceGroupName $resourceGroupName -ServerName $serverName -DatabaseName $database.DatabaseName -Force
}

I'm seeing the below error thrown on the Get-AzSqlDatabase command:

Your Azure credentials have not been set up or have expired, please run Connect-AzAccount to set up your Azure credentials. No certificate thumbprint or secret provided for the given service principal '***'.

I've also seen the same with the managed instance equivalent command 'Get-AzSqlInstanceDatabase'.

What is the best way to authenticate these types of requests that does not require interaction (given that it is being run within a pipeline as part of an automated process)?

As an FYI, the pipelines are being run on a self-hosted Windows agent and the scripts within the job have been given access to the OAuth token.
|
Azure DevOps Pipeline not authenticating Az SQL powershell commands
|
Azure Data Factory has a feature to split service cost per pipeline, so you can break the cost down to pipeline level, but there is no tagging feature provided by Azure to separate cost in Azure Cost Management or in the export of Azure cost data for dashboards.

There is only the "Annotation" structure provided on ADF pipelines, and it is used for monitoring purposes. If you have hundreds of pipelines and would like to filter in ADF portal > Monitoring, it is useful. Annotations can be created in the ADF Portal > Pipelines: click a pipeline and, at the top right, click "Properties" and "+ New" under annotations.

Here are the details: https://learn.microsoft.com/en-us/azure/data-factory/concepts-annotations-user-properties

For Synapse, there is no feature to enable cost per pipeline, and also no tagging for pipelines.
|
We are ingesting data from different resources to our data lake with ADF pipelines and have hundreds of pipelines (under 1 resource group). Cost per pipeline is already activated and I'm embedding Azure Portal cost data to a PowerBI Dashboard and can track cost per pipeline.However, due to pipeline owners are not stated in there, I need to find owners of pipeline each time.To overcome this issue, I tried to add annotation but couldnt see the pipeline's cost in cost management.How can I tag pipelines in ADF (I have same issue for synapse too) and split the cost of each pipeline by their tags?Thanks All!
|
ADF Pipeline Cost tagging for team based seperation
|
When we don't want to modify our input messages and just want to publish them unmodified to the output, the processor can be removed; the input and output configuration alone are sufficient. Removing the processor solved my problem. Following is the working configuration:

input:
http_server:
address: ""
path: /
ws_path: /ws
allowed_verbs:
- POST
timeout: 5s
rate_limit: ""
output:
kafka:
addresses:
- abc-central-1-kafka-2-client.abc.svc.prod1-us-central1.gke.kaas-prod-us.gcp.extscloud.com:16552
- abc-central-1-kafka-1-client.abc.svc.prod1-us-central1.gke.kaas-prod-us.gcp.extscloud.com:16552
- abc-central-1-kafka-0-client.abc.svc.prod1-us-central1.gke.kaas-prod-us.gcp.extscloud.com:16552
topic: abc_abc-central-1.abc-tyur-service-in.v1
client_id: abc-tyur-service-dev-1
tls:
enabled: true
root_cas_file: /Users/ca.crt
client_certs:
- cert_file: /Users/cert.pem
key_file: /Users/key.pem
logger:
level: ALL

Thanks to benthos community on Discord
|
I want to create a pipeline to read XML data from a Postman HTTP URL, consume it through the benthos input configuration, and publish this message to a Kafka topic using a benthos processor. Following is the configuration I was trying, but it doesn't seem to work:

input:
http_server:
address: ""
path: /
ws_path: /ws
allowed_verbs:
- POST
timeout: 5s
rate_limit: ""
pipeline:
processors:
- xml: {}
output:
kafka:
addresses:
- abc-central-1-kafka-2-client.abc.svc.prod1-us-central1.gke.kaas-prod-us.gcp.extscloud.com:16552
- abc-central-1-kafka-1-client.abc.svc.prod1-us-central1.gke.kaas-prod-us.gcp.extscloud.com:16552
- abc-central-1-kafka-0-client.abc.svc.prod1-us-central1.gke.kaas-prod-us.gcp.extscloud.com:16552
topic: abc_abc-central-1.abc-tyur-service-in.v1
client_id: abc-tyur-service-dev-1
tls:
enabled: true
root_cas_file: /Users/ca.crt
client_certs:
- cert_file: /Users/cert.pem
key_file: /Users/key.pem
logger:
level: ALL
|
Benthos pipleline to read XML from postman and publish to kafka topic
|
TLDR (ctrl+f -> CI_COMMIT_TAG): https://docs.gitlab.com/ee/ci/variables/predefined_variables.html

Let's focus solely on Git to gain a clearer understanding. When you tag in Git, you pin a label (like "version 0.0.4") to a specific Git commit. In Git, a commit has ancestors, but a "branch" is just the current head of some line of development. Said differently, a commit captures the state of the project at a specific moment and can be associated with multiple branches concurrently.

So, since a commit can be associated with multiple branches, it is challenging for GitLab (and any other CI system) to detect which branch you tagged. That's why, when you tag, GitLab treats it as a standalone event, which is called a tag pipeline (as opposed to a branch pipeline), where $CI_COMMIT_TAG is available but $CI_COMMIT_BRANCH is not, and vice-versa.
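So in a tag pipeline the rule should only test $CI_COMMIT_TAG - a minimal sketch (job name and script are placeholders):

deploy:
  script:
    - ./deploy.sh
  rules:
    - if: '$CI_COMMIT_TAG =~ /^v?\d+\.\d+\.\d+$/'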
|
I have a repository on a GitLab instance for which I want to set up the deployment pipeline trigger. I need a trigger rule which always triggers a pipeline if there is a new commit on "release" with a tag in SemVer format.

The repository is set up with two branches, "main" and "release" ("release" is protected and can only be merged into; directly pushing is disabled). The workflow is the following:

- Develop and commit some changes to "main"
- Create a merge request "main" -> "release" and execute the merge (via the GitLab UI)
- Create a new tag (e.g. "0.0.4") on the release branch (via the GitLab UI)
  => now the pipeline should be triggered

Unfortunately I haven't found a solution yet and don't know why my version does not work. I thought this should work; it checks if a commit tag is in the specified format and if the commit is to the "release" branch....
rules:
- if: '$CI_COMMIT_TAG =~ /^v?\d+\.\d+\.\d+$/ && $CI_COMMIT_BRANCH == "release"'
...
|
Gitlab Pipeline Trigger Rule
|
ARM templates support conditionally deploying resources that are defined in the template json files. You need to update the template json files by adding conditions to conditionally deploy the resources based on the value of a parameter. For more details, see "Conditional deployment in ARM templates".

To run the ARM template deployment in a pipeline on Azure DevOps, you can also use the AzureResourceManagerTemplateDeployment@3 task.
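As a rough sketch, the entry for that pipeline in ARMTemplateForFactory.json could get a condition like the one below (the 'environment' parameter name and value are assumptions, and the existing properties of the resource stay exactly as generated):

{
  "condition": "[not(equals(parameters('environment'), 'preprod'))]",
  "type": "Microsoft.DataFactory/factories/pipelines",
  "apiVersion": "2018-06-01",
  "name": "[concat(parameters('factoryName'), '/POC test pipeline')]",
  "properties": { },
  "dependsOn": [ ]
}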
|
I have three data factories across the environments under my subscription, and all the data factories are in sync with the same code. I am trying to test my preprod data factory. I have a preprod yml file and I have two pipelines in my data factory. I am trying to exclude one pipeline from my preprod data factory. Is it possible to exclude a pipeline in my preprod data factory?

I have tried using the following code to exclude a pipeline from my yml script.

steps:
- task: AzurePowerShell@5
displayName: Deploy ADF
inputs:
azureSubscription: ******
ScriptType: InlineScript
Inline:
'New-AzResourceGroupDeployment
-ResourceGroupName "******"
-TemplateParameterFile "$(System.DefaultWorkingDirectory)/Preproduction/ARMTemplateParametersForFactory-Preprod.json"/ !*. -pipeline.POC test pipeline
-TemplateFile "$(System.DefaultWorkingDirectory)/******/ARMTemplateForFactory.json"
-factoryName "*******" -Mode "Incremental"'
azurePowerShellVersion: LatestVersion
|
Is it possible to exclude a pipeline in datafactory using yaml script in AZ DevOps
|
If you want to assign values to parameters dynamically, you can follow the procedure below:

Create a table in the database with ParameterName and ParameterValue columns using the code below:

create table parameters(ParameterName varchar(20), ParameterValue varchar(20))
insert into parameters(ParameterName, ParameterValue) values('src_root', 'AA')

Create a linked service for the database and a dataset with the linked service. Drag the Lookup activity, select the dataset as the source, and select the Query option. Use the query below:

select ParameterValue from parameters where ParameterName = 'src_root'

Use the dynamic expression below to assign the value to the parameter:

@activity('Lookup1').output.value[0].ParameterValue

The expression will give the output as shown below:
|
TLDR;Is there a way to assign synapse pipeline/activity parameter values (such as source/destination folder names etc.) dynamically i.e. reading through a config file or table instead of manually specifying the same?-----------------------------------------------------------------------I have a Synapse pipeline that has a Notebook Activity performing some transformation and copying tasks. I am passing the source and destination folders and some other parameters to the notebook through activity/pipeline parameters. However, instead of hardcoding these parameter values, I would like to define these values in a file (or a table) and then read the values from there, so I can have one source of truth for all my pipelines which will use the same parameters and also make the solution portable across different environments, without needing to modify each pipeline. Is there an expression or function that would allow me to add such dynamic content?My current solution is to define a YAML config file and add the path to this config file in each of my pipelines so it can read those values from there. But I was wondering if there is more elegant way to do it.
Another options seems to be using a lookup activity but it is bit involved and require setting up control tables and writing queries etc. ???
|
Synapse Pipeline: dynamically assign parameter values from a config file or table
|
I am not sure if I exactly understand what you are trying to do. A minimal example of what you currently have would be really helpful to understand your setting.

But it seems that you want to pass input to the standard input of the program that is executed. If you are running Linux you are probably looking for the pipe | operator. This is commonly used e.g. for docker login like...
job1:
script:
- echo "$CI_REGISTRY_PASSWORD" | buildah login -u "$CI_REGISTRY_USER" --password-stdin $CI_REGISTRYIf the input is much longer, instead ofechoyou could consider saving the information in a file and usecat.
|
I'm using a binary in a Gitlab pipeline.
This binary expects an input during the run which I cannot provide so my pipeline run fails.On my local setup, everything is working just fine.
$HOME/bin/binary-name ENTER
binary starts its things
Please input XYZ to confirm your action
XYZ ENTER
done

I tried adding an 'echo "xyz"' to my .gitlab-ci.yml but it's not working since the earlier process (which was starting the binary) doesn't finish and fails.

Is there any way to provide that input?
|
How can I send command line inputs during a pipeline run?
|
I am open to better solutions; however, this is what I have discovered:

The Pipeline framework does not natively support eval_set(). The validation data set has to be preprocessed just like the learning data, but the Pipeline fit and transform methods do not take into account the data specified in the eval_set() argument. One has to break out the preprocessing and model fitting steps and run them separately.

See more discussion here
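For illustration, a minimal sketch of that split (assuming lightgbm >= 3.3 for the early_stopping callback; the stopping_rounds value is arbitrary):

from lightgbm import LGBMClassifier, early_stopping

# fit the preprocessing on the training split only, then reuse it for the validation split
X_learn_t = coltransformer.fit_transform(X_learn, Y_learn)
X_val_t = coltransformer.transform(X_val)

model = LGBMClassifier()
model.fit(
    X_learn_t, Y_learn,
    eval_set=[(X_val_t, Y_val)],
    callbacks=[early_stopping(stopping_rounds=50)],
)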
|
I want to set up a lightGBM model with early-stopping validation. I also want to follow the best practice of using Pipeline to combine preprocessing, model fitting and prediction. Code below:

coltransformer = ColumnTransformer([
('cat', OneHotEncoder(sparse_output = False), cat_invar),
('num', 'passthrough', num_invar)])
lgbPipe = Pipeline([
('preprocess', coltransformer),
    ('lgb', LGBMClassifier())])
X_learn, X_val, Y_learn, Y_val = train_test_split(X, y, test_size = 0.2)
lgbPipe.fit(X_learn, Y_learn, lgb__eval_set = (X_val, Y_val))

However, I got the following error message when running the code:

ValueError: DataFrame.dtypes for data must be int, float or bool.
Did not expect the data types in the following fields: Geography, Gender, Surname

It appears the validation datasets (X_val, Y_val) were not being transformed in the preprocessing step. How should I properly set up the pipeline?
|
how to properly incorporate early stopping validation in sklearn Pipeline with ColumnTransformer
|
As far as I remember it's not possible by design.

If both projects are part of a multi-module project, you need to set up JaCoCo accordingly to build an aggregate report. If you have two independent Maven projects, the idea is that you build them in separate build configurations. Another option would be to set up JaCoCo to aggregate the results of independent scans in a single report.
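For the last option with Maven, a rough sketch of the jacoco-maven-plugin report-aggregate goal in a separate aggregator module (the version and module layout are assumptions; the covered projects must be declared as dependencies of that module):

<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.11</version>
  <executions>
    <execution>
      <id>report-aggregate</id>
      <phase>verify</phase>
      <goals>
        <goal>report-aggregate</goal>
      </goals>
    </execution>
  </executions>
</plugin>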
|
In my command execution build step I have the below script:

mvn test -Djacoco.percentage.instruction=0.00
echo "##teamcity[jacocoReport dataPath='projectPath/target/coverage-reports/jacoco-unit.exec' includes='com.project.test.*' sources=' projectPath/src/main/java' reportDir='projectPath/target/temp/jacocoReport']"
echo "##teamcity[jacocoReport dataPath='projectPath2/target/coverage-reports/jacoco-unit.exec' includes='com.project.test2.*' sources='projectPath2/src/main/java' reportDir='projectPath2/target/temp/jacocoReport']"

I want two code coverage reports in the build, but the second echo statement just overwrites the first one. Is there any way to integrate multiple jacocoReport per build?
|
How to integrate multiple jacocoReport in teamcity?
|
I managed to solve the problem myself. This behavior has to do with subscription settings. If I understand correctly, it is not enabled by default for the private endpoints to be created. First the resource provider Microsoft.Network must be registered to your subscription. After you do that, you can create private endpoints with Azure Data Factory.
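If it helps, the provider can be checked and registered from the Azure CLI (assuming you are logged in to the affected subscription):

az provider show --namespace Microsoft.Network --query registrationState
az provider register --namespace Microsoft.Network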
|
I am facing an issue when creating a private endpoint to Azure SQL Server in ADF. It always fails no matter what settings of the SQL server I choose. I tried creating the endpoint separately from the left-hand menu in ADF and also while creating a Linked Service.The only information I get from Azure is "Provisioning state" status as "Failed" and the error message is simply "Failed to create PrivateEndpoint for client ". I followed steps from various YouTube tutorials and also followed official Microsoft instructions. I think I did everything correctly. The creation seems pretty straightforward but the result is always the same.I have tried with a different Azure account and it didn't work either. I've also tried creating the endpoints and all the resources in various regions (East US, West Europe). When I checked service health, there were no issues. As I have just the basic Microsoft Support plan, I could only raise a question to make sure, that this has nothing to do with my subscription. I can't raise any technical incident to MS.Is anybody facing similar issues in the last days? I've been trying for 3 days already with no positive result. I think this might be some general service problem. Especially that a similar problem already occurred some time ago according tothis thread.
|
Azure Data Factory Private Endpoint provision fails without explanation
|
Yes, you can easily do something like the following (this assumes the usual imports: json and apache_beam as beam):

class CustomParsing(beam.DoFn):
def to_runner_api_parameter(self, unused_context):
return "beam:transforms:custom_parsing:custom_v0", None
def process(self, element: bytes, timestamp=beam.DoFn.TimestampParam, window=beam.DoFn.WindowParam):
parsed = json.loads(element.decode("utf-8"))
parsed["datetimefield"] = timestamp.to_rfc3339()
yield parsed
...
with beam.Pipeline(options=pipeline_options) as p:
(
p
| "ReadFromGCS" >> beam.io.ReadFromText('gs://bucket/*.json')
| "CustomParse" >> beam.ParDo(CustomParsing())
| "WriteToBigQuery" >> beam.io.WriteToBigQuery(BIGQUERY_TABLE,
schema=BIGQUERY_SCHEMA,
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
)Thedatetimefieldcolumn will be added and data can be inserted to that field in BQ table
|
Piggybacking off of this post, I want to create a Beam Dataflow job to load data from GCS to BigQuery. There are thousands of files within the GCS bucket, all of which are pretty massive and are compressed JSONL data. The data format makes it impossible to create a partitioned table using a date field, so I would like to add my own during the pipeline.

Is it possible to add a manual field to the pipeline, separate from the compressed data, so that when I load the data from GCS to BigQuery it appears in the resulting BigQuery table? I would like to be able to do this without having to unzip any of the files or performing a sequential SELECT <CRITERIA> operation on the table itself.
|
Can I load compressed jsonl data from GCS to BigQuery and add an additional date column using DataFlow
|
You should definitely use a list on which you call "execute", as somebody mentioned. Also, you should explicitly call a shell - either PowerShell or bash or whatever - to make it clear where execution takes place.

As far as the command is concerned: you are writing a Groovy script, so you should just use Groovy constructs and get rid of the unneeded Linux shell quoting, as another stackoverflower mentioned. You should also avoid as many apostrophes (single and double quotes) as you can to keep it readable - luckily this is Groovy, so you can use a dollar slashy string here (dollar slashy info link).

The result could look like this:

groovy_script = $/["sh", "-c", "find SomePath -mtime +14 -not -path SomePath/SomeFolder/*"].execute().text/$
|
I'm dealing with the below command, which works perfectly fine directly on the target machine (it returns files older than 14 days in SomePath, excluding SomeFolder):

find SomePath -mtime +14 -not -path "SomePath/SomeFolder/*" -not -path "SomePath/SomeFolder/" -print

When I try to run it on some Jenkins node (SomeNodeName) inside a pipeline, the filtering does not work though (it also returns files from SomeFolder). Stage code:

steps {
script {
groovy_script = '''#!/bin/bash
println "find SomePath -mtime +14 -not -path \\"SomePath/SomeFolder/*\\" -not -path \\"SomePath/SomeFolder\\" -print".execute().text
'''.trim()
Jenkins.instance.slaves.find { agent ->
agent.name == SomeNodeName
}.with { agent ->
nodeOutput = RemotingDiagnostics.executeGroovy(groovy_script, agent.getChannel())
}
println nodeOutput
}
}

Why are the results not being filtered when executed inside the pipeline, and how should it be done?
|
Bash statement works differently while executed from Jenkins pipeline
|
Not sure if one of the comments solved your issue, but you could also introduce an intermediate variable:

variables:
APP_WITH_APPVERSION: $APP-$APP_VERSION
rules:
- if: $CI_COMMIT_TAG == "$APP_WITH_APPVERSION"
|
In gitlab-ci, I would like to trigger a job only if $CI_COMMIT_TAG matches a string which is made of two variables.

Example:

job1:
variables:
APP="my-app1"
APP_VERSION="1.0.0"
script:
- some_cmd
rules:
- if: $CI_COMMIT_TAG == "$APP-APP_VERSION"this pipeline should trigger job1 only if I commit a new specific tag "my-app1-1.0.0"My point is that I use templates, and the rule should not be specific just for a one job, but for all jobs - every job has a unique values in variables ($APP and $APP_VERSION).So the pipeline should trigger ongit tagaction and only specific job. But it does not work and I'm stuck...Is there any solution to evaluate those variables values?I think that I can trigger all jobs and use if statement in a job's script block, but I would be much happier to evaluate it before the job run.
|
Gitlab-CI rule which compare variable to string of variables
|
You can directly use a copy activity with the HTTP connector to call your endpoints: add it as the source, select an appropriate sink, and copy the data. Sample linked service:

Source example

OR

Use a copy activity with a sample .csv file with one column as the source dataset. Add an additional column and provide a new column named Webactvityoutput with value = @string(activity('Web1').output). In the sink, use another dataset with delimited text and pass a file name with a .json extension. Set Escape character to No escape character and Quote character to No quote character. In the mapping tab, click on Import schema, delete the column from the dummy file, and only keep the additionally added column.

Execute the pipeline and check if the web activity output got stored in the .json file in ADLS.
|
I need to ingest data from 3 endpoints.
my pipeline is as follows:
a web activity to obtain authentication to the api via a post request and 3 web activities to retrieve the data via a get request.
I would need to ingest data from these 3 web activities to a location in the raw zone .
How can I achieve this?I have managed to make the API calls and get a response. But have no knowledge on how to proceed further
|
Ingesting data from an API and webactivity
|
You can build a chain from java.util.function.Function instances and compose them with the andThen method:

// declaring all pipelines
Supplier<String> startPipeline = () -> "string-input";
UnaryOperator<String> pipeline0 = s -> s+= "-0";
UnaryOperator<String> pipeline1 = s -> s+= "-1";
UnaryOperator<String> pipeline2 = s -> s+= "-2";
UnaryOperator<String> pipeline3 = s -> s+= "-3";
UnaryOperator<String> pipeline4 = s -> s+= "-4";
// chain pipelines
String outputPipeline = pipeline0
.andThen(pipeline1)
.andThen(pipeline2)
.andThen(pipeline3)
.andThen(pipeline4)
.apply(startPipeline.get()); // call them with inital value
// print result
System.out.println(outputPipeline); // it will print string-input-0-1-2-3-4java.util.funtion.UnaryOperator<String>is same asjava.util.funtion.Function<String,String>
|
I have a process that starts by reading a raw document, then removes its unusable parts, then extracts its chapters, does some detections, extracts specific content, etc.

In my mind, the basic way to implement such a pipeline is to use streams: dedicated functions going from A to D objects, representing the refinement steps achieved: Stream<A> → Stream<B> → Stream<C> → Stream<D>, and at the end, returning D.

A way that looks cleaner to me is to use functions: Supplier<A> or A → Function<A,B> → Function<B,C> → Function<C,D>, and returning D.

But does Java in its latest versions offer a more convenient structural, generic class to help create pipelines? Seeing them as threads, for example, allowing one pipeline to be cancelled if one of its steps fails... or whatever useful help it offers.

Reading over the Internet and S.O., I've found that once, around 2010, a pipeline pattern appeared, but I saw it described in text and not code.
|
Does a class help creating a pipeline in Java 17? I would like to refine a raw content to a fine object in multiple steps [closed]
|
Building the Docker image for every commit makes sense if you want to ensure that the Docker image gets built properly (to make sure there were no changes in the commit that stops the image from building / fails the build). Without this check, it is possible to make some sort of mistake (such as adding incorrect commands to the Dockerfile), then merge the commit to master, at which point the build will fail since the Docker image cannot get built. Since you ideally want to catch errors before they are merged into master, you want to build each commit.As for your second question, it is too broad. The pipeline would change depending on what frontend framework you were using, what sort of tests you wanted to run (unit tests/selenium tests, etc...).
|
I was taking a look at the jenkins in file in our organization and noticed that we are building a docker image on every commit.
In my opinion, it doesn't make sense to build a Docker image if the image is not pushed to a registry ( and destroyed when the Jenkins job is over).The only times we push the Docker image to the registry is for the following branches :
1-Develop
2-Master
3-Support

Question 1: Am I correct not to want to create a Docker image for every commit?
Question 2: Can anyone share with me an example of a Jenkins pipeline for a frontend application?

Thank you. I took a course on Docker to understand better the concept of image creation.
|
When do I need to create a Docker image in my Jenkins pipeline?
|
Could this all be done using a powershell script? Yes. You can use the REST API Artifact Details - Get Packages to list all versions of the package. Then you can get the version of the nupkg file. Finally, you can loop over all versions of the package and compare them with the current version of the nupkg file, and then use a logging command to set the error message of the pipeline.

Here is the PowerShell example:

$token = "PAT"
$packagename= "pacakgename"
$GetPackageversion="https://feeds.dev.azure.com/{Org}/{Project}/_apis/packaging/Feeds/{FeedId}/packages?packageNameQuery=$($packagename)&includeAllVersions=true&api-version=7.1-preview.1"
$token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($token)"))
$response = Invoke-RestMethod -Uri $GetPackageversion -Headers @{Authorization = "Basic $token"} -Method Get -ContentType application/json
$file=Get-Item "path\xxx.xxx.nupkg"
$name= ($file.name) -split('.nupkg')
$version = $name -split('pacakgename.')
$count = 0
foreach($versionobj in $response.value.versions)
{
$Artifactsversion = $versionobj.version
echo $Artifactsversion
if($Artifactsversion -eq $version)
{
$count = $count + 1
}
}
echo $count
if($count -eq 0)
{
echo "There is no same version"
}
else
{
echo "##vso[task.logissue type=error]The version is existing in the feed."
}
|
I want to ensure that there is a warning or a failure in the build pipeline if the developer forgot to increment the version number in the packages they are developing. Currently we have NuGet artifacts that are deployed with a version number in the build artifacts. I want to be able to check these against what has been deployed to the artifact feed.

Build Pipeline artifacts example
Artifact Feed Example

Specifically, I want to ensure that there is no FINAL release of a package (marked with the view "Release" in the artifact feed) already deployed in the artifact feed if we are generating PRE-RELEASES for it in the build pipeline. I want to ensure that any new developments have an incremented version number.

I attempted to use a task such as ArtifactDeploymentDetectorTask in conjunction with a powershell task, but I was unsuccessful in getting that functioning. Are there any tasks that would be well suited for this problem? Could this all be done using a powershell script?
|
Check/Fail Build pipeline if build pipeline artifact has already been published to artifact feed with same version number
|
I have solved this problem by creating a new keychain in my Fastfile and using the keychain name and password in the match action.
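A sketch of that setup in the Fastfile (the keychain name and the password variable are placeholders):

create_keychain(
  name: "ci.keychain",
  password: ENV["KEYCHAIN_PASSWORD"],
  default_keychain: true,
  unlock: true,
  timeout: 3600
)

match(
  type: "appstore",
  readonly: true,
  keychain_name: "ci.keychain",
  keychain_password: ENV["KEYCHAIN_PASSWORD"]
)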
|
I am trying to set up a pipeline for an iOS application using Fastlane and Match, and I have followed the following guide: Azure iOS Pipeline Guide. Is there anything that I am missing?

I have checked the certificates and the provisioning profile and everything is fine. It also runs locally, but fails only in the pipeline. Any help to point me in the right direction would be really helpful. Thanks.

I have tried running it locally and it works. From the pipeline I have tried skipping the generating of the app.dSYM step, but that did not help either.
|
Azure iOS Build gets stuck at Generating '<AppName>.app.dSYM' step
|
For me it's not entirely clear how the bat and python scripts should interact and how the variables should be defined. But generally you could use a downstream pipeline, where you retrieve the .bat files from artifacts of the parent pipeline, and after the job is finished you could delete the bat files from the repository.

1. Save the bat files as pipeline artifacts in project-a:

build_artifacts:
stage: build
artifacts:
paths:
- bat1.bat
- bat2.bat
- bat2.batIn the project-b pipeline you could retrieve the artifacts like this:test:
stage: test
script:
- python myscript.py -input bat1.bat
needs:
- pipeline: $PARENT_PIPELINE_ID
job: build_artifactsNow your python script could use the bat files, because they are downloaded in your pipelines workdir2. You could trigger project-b pipeline from project-a like this:trigger_job:
trigger:
project: project-group/project-b
strategy: dependThestrategy: dependmakes the job wait for the result of the child pipeline, if you want this.Finally you could define a job to delete the *bat files, but you would need to create a token to use the Gitlab API, because the 'CI_JOB_TOKEN' has no access to modify the repository.By usingstagesor theneedskeyword you would have to ensure that the jobs run in the right order
|
I have a very specific way I need my project to work. I have .bat files being committed to project A with a collection of arguments that need to be actioned by a Python script on project B.I need project A to be triggered when a new file is committed and then triggering project B to run the script using the variables from the .bat file (sys.argv[1] - [4]Once project B has been triggered I would like the file in Project A to be deleted so that when the next file is committed to project A there isn’t any conflicting files as there could be up to 100.bat files committed in one dayI have tried simply using power automate to trigger a pipeline once the .bat file is ready however due to organisation blokes I wasn’t able to do so. I have tried going through the GitLab wiki and it seems possible however it’s not clear how my .gitlab-ci.yml is structured on each project for my specific need
|
GitLab triggers between 2 projects
|
You could use a simple $lookup to find the documents already having a recovery. Then you can deduce the documents without recovery because they have 0 matching recoveries.

Here is an example of how you could implement that:

db.item.aggregate([
{
"$lookup": {
"from": "item_recovery",
"localField": "_id",
"foreignField": "_id",
"pipeline": [
{
"$project": {
_id: 1
}
}
],
"as": "hasRecovery"
}
},
{
$match: {
$expr: {
$eq: [
{
$size: "$hasRecovery"
},
0
]
}
}
},
{
$unset: "hasRecovery"
}
])

You can test this solution on mongoplayground
|
What would be the best way to compare one attribute from two collections to find which ones are missing?

I have two collections, "items" and "items_recovery".

items look like this:

{
_id: ObjectId("64789354c78b2fd7db3ebea"),
item_name: 'BIKE_1032_HE_files/17/1_45',
...
}
db> db.items_recovery.find().count()
6174859
db> db.items.find().count()
6362241

There are indices on item_name.

What would be the best way to know which ones I am missing in the recovery collection using an aggregation pipeline?
|
MongoDB finding set differences
|
You declare a string:

job_params = "[string(name: 'app_one', value: '4.8.6'), string(name: 'app_three ', value: '1.2.4'), string(name: 'app_ten', value: '2.7.1')]"

which is why you get:

java.lang.ClassCastException: class org.jenkinsci.plugins.workflow.support.steps.build.BuildTriggerStep.setParameters() expects java.util.List<hudson.model.ParameterValue> but received class java.lang.String (ParametersAction)

Use a list instead:

def List job_params = [ string(name: 'app_one', value: '4.8.6'), string(name: 'app_three ', value: '1.2.4'), string(name: 'app_ten', value: '2.7.1') ]
|
In a Jenkins pipeline I want to run "build job" using the variables job_path and job_params.

job_path = "long/path/to/different/jobs"

job_params = [string(name: 'app_one', value: '4.8.6')]

but more often

job_params = "[string(name: 'app_one', value: '4.8.6'), string(name: 'app_three ', value: '1.2.4'), string(name: 'app_ten', value: '2.7.1')]"

pipeline {
stages {
stage('Deployment') {
steps {
script {
build job: "${job_path}", parameters: job_params
}
}
}
}
}

Path works perfectly, but params don't.
I get an error that a list is expected and a string is provided.java.lang.ClassCastException: class org.jenkinsci.plugins.workflow.support.steps.build.BuildTriggerStep.setParameters() expects java.util.List<hudson.model.ParameterValue> but received class java.lang.Stringand second one :Also: org.jenkinsci.plugins.workflow.actions.ErrorAction$ErrorId: xyz
Caused: java.lang.IllegalArgumentException: Could not instantiate {job=long/path/to/different/jobs, parameters=[string(name: 'app_one', value: '4.8.6')]} for org.jenkinsci.plugins.workflow.support.steps.build.BuildTriggerStepI try cast this param variable to list,
Please advice me the best way to run job, pointed by path with random number of parameters.
|
Jenkins pipeline how to run build job with parameters from variables
|
Technically, the status.txt file is stored in your Git repository. So if you want to change its state and be able to retrieve it next time, you should:

- modify its content
- git commit / push it to store its new content

But functionally it doesn't seem a good idea to use your Git repo to store your latest GitLab CI pipeline status. Moreover, committing it would trigger another pipeline, which might be an unexpected side effect (note: you could add [skip ci] somewhere in the commit message to prevent the pipeline from executing, but we're probably too far from your original question here)...

Maybe the best solution would be to use GitLab's APIs:

- get the latest pipeline's execution (use sort and pagination to retrieve only the latest one)
- get a pipeline's execution status
- get a job's execution status
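For example, the latest pipeline for a ref can be fetched with something like this (host, project id, ref and token are placeholders):

curl --header "PRIVATE-TOKEN: <your_token>" \
  "https://gitlab.example.com/api/v4/projects/<project_id>/pipelines?ref=main&order_by=id&sort=desc&per_page=1"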
|
I am trying to update a file in my repository via a job within the pipeline.

my pipeline:

stages:
- check_url
standardCheck:
stage: check_url
image: mcr.microsoft.com/powershell:latest
script:
- pwsh -File ./variable.ps1

my script variable.ps1:

#read status from file, initial "true"
$status = Get-Content -Path "status.txt"
# reads "true"
Write-Host "current Status: $status"
# set content to "false", doesn't work!
Set-Content -Path ".\status.txt" -Value "false" -Force
# right location
Write-Host $(Get-Location)
# reading the new status "false" works as well
Write-Host "Status after setting: $(Get-Content -Path "status.txt")"my repo:gitlab-ci.ymlstatus.txtvariable.ps1I expected (that's my goal), the script would change the value of status.txt to "false". Not only within the job/pipeline. However I still have the value "true".How can I update the file on easy way?
I tried to use ci global variable, however i could not override the value permanently. When the Pipelines starts, so it takes the default value and not the updated one.My actually goal is, I need a status after executing job script, which can be accessed and overrode from same pipeline job at the next time. Depending on the status value, another processes are started.Why does my above code not work?Is there another way to achieve my goal to have a status of a value anytime in te gitlab instance?
|
GitLab CI/CD | How to Update a txt-File from a job within a pipeline
|
The image you're using does not have docker installed. You need to either install docker as part of your image or specify an image: that already has docker installed, e.g.:
unit_test_job:
  image: docker
The problem is that the Dockerfile you used for building your image probably doesn't work the way you expect. The final image built from the Dockerfile you shared would be the same as if you simply had a Dockerfile like so:
FROM gradle
Multi-stage docker builds do not combine the images, as you seem to be expecting. Only the last FROM statement constitutes the final built image. All the other FROM statements in your Dockerfile effectively do nothing as written.
If you want docker and gradle in the same image, you might do something like this:
FROM gradle
RUN apt update && apt install -y docker.io
|
I made a simple Dockerfile:
FROM docker:latest AS docker
FROM gradle:7.4-jdk17 AS gradle
FROM docker AS final-docker
FROM gradle AS final-gradle
I am using the image made from it in my GitLab pipeline. My goal is to use a docker command to run another image inside one of the stages (gradle and the JDK are needed in all stages). This stage looks like this:
unit_test_job:
tags:
- docker
- linux
services:
- docker:dind
variables:
DOCKER_TLS_CERTDIR: "/certs"
stage: unit_test
script:
- docker run --detach --name mycontainer -p 61616:61616 -p 8161:8161 --rm apache/activemq-artemis:latest-alpine
- gradle unitTest
cache:
key: "$CI_COMMIT_REF_NAME"
policy: pull
paths:
- build
- .gradle
The jdk and gradle commands are working correctly, but docker commands are not recognized:
$ docker run --detach --name mycontainer -p 61616:61616 -p 8161:8161 --rm apache/activemq-artemis:latest-alpine
/usr/bin/bash: line 141: docker: command not found
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
I did a lot of research trying to find what can be wrong but cannot find anything. Please help me resolve the case, explain why it doesn't work, and correct these lines to make it work.
|
Running gitlab pipeline with jdk17, gradle and docker, docker command not found on test stage
|
Here is an oversimplified example, based on your pipeline, of how you can expose the cache as an artifact; it works.
- build
- output
- artifact
cache:
key: "$CI_COMMIT_REF_SLUG"
paths:
- dir1/*.txt
- dir2/*.txt
job A:
stage: build
script:
- mkdir dir1 dir2
- echo "build" > dir1/hello1.txt
job B:
stage: output
script:
- ls -l
- mkdir dir2
- echo "output" > dir2/hello2.txt
job C:
stage: artifact
script:
- ls -l
artifacts:
paths:
- dir1
- dir2
|
I have 3 stages in my GitLab runner. This is for an Android project. In stage 1 I build the APK, in stage 2 I just create a dummy text file, and in stage 3 I archive the cached files as artifacts.
In job 1 I cached the APK file and in job 2 I cached that dummy text file. The runner creates the cache files in cache/Gitname/ProjectName/cache.zip, which is outside the project directory. But in stage 3, when I try to archive them as artifacts, I get an error saying that I cannot access paths outside the build folder.
stages:
- build
- output
- artifact
cache:
key: "$CI_COMMIT_REF_SLUG"
paths:
- app/build/outputs/apk/debug/*.apk
- tmpDir/*.txt
build-job:
stage: build
script:
- echo "$CI_COMMIT_REF_SLUG"
- echo "Build in progress"
- time
- echo 'sdk.dir=/Users/Library/Android/sdk' > local.properties
- cat local.properties
- ./gradlew assembleDebug
- echo "Build Done"
- mkdir outputDir/
artifacts:
paths:
- "app/build/outputs/apk/debug/*.apk"
build-test:
stage: output
script:
- rm -rf tmpDir
- mkdir tmpDir
- pushd tmpDir
- pwd
- echo This is some text > myfile.txt
- cat myfile.txt
- popd
build-artifact:
stage: artifact
script:
- echo "Copying cache"
- pwd
artifacts:
paths:
- "../../../../../cache/GitUser/cicdtest/main-protected/*.zip"
|
How to artifact custom files that I added as cache in GitLab CI?
|
Is there a way to resolve this without hardcoding the sleep?
Typically, you poll until the server responds:
- while ! curl http://localhost:80; do sleep 1; done
You could code a little bit of bash to add a timeout, but I am super lazy and use timeout:
- timeout 60 bash -c 'while ! curl http://localhost:80; do sleep 1; done'
or, to avoid the quoting, with an exported function:
- f() { while ! curl http://localhost:80; do sleep 1; done; }; export -f f; timeout 60 bash -c f
There is a real-life example of this in ClickHouse.
|
I encountered a problem in my GitLab pipeline when running a container with an Apache service. It failed because Apache was not operational when the pipeline proceeded. I solved it by adding a 60-second sleep in the pipeline. Is there a way to resolve this without hardcoding the sleep?
I attempted to address it using a health check in the Dockerfile, but it seems to be the wrong approach. How do you handle such situations?
Stage code:
test_job:
stage: test
image: docker:20.10.16
services:
- docker:20.10.16-dind
tags:
- shell
script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
- docker run -d -p 80:80 --env APP_ENV=$APP_ENV --env APP_DEBUG=$APP_DEBUG --name $CONTAINER_NAME $CONTAINER_IMAGE_NAME
- sleep 60
- curl http://localhost:80
- docker stop $CONTAINER_NAME
- docker container rm $CONTAINER_NAME
Part of the Dockerfile where I'm trying to do what is described:
CMD composer install && \
chmod -R 777 /var/www/html/config/autogenerated && \
chmod -R 777 /var/www/html/var && \
yarn install && \
yarn encore production && \
apache2-foreground
HEALTHCHECK --interval=10s --timeout=10s --retries=12 \
CMD curl -f http://localhost:80 || exit 1
|
Apache Service Startup Issue in GitLab Pipeline
|
GitLab automatically fails the job when a script exits with code 1, which was the case here because the scan had warnings. Adding the -I option (do not fail on warnings) fixed the error.
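For illustration, the script section from the question might look like this with the flag added (same image, URL and paths as in the question):
script:
  - mkdir /zap/wrk
  # -I tells zap-baseline.py not to return a failure exit code when only warnings are found
  - /zap/zap-baseline.py -t http://webURL.com -g gen.conf -r /zap/wrk/report.html -I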
|
I am using a docker image from OWASP in my pipeline to scan my web app and produce an HTML report, and I am encountering a problem I've spent the whole day trying to solve.
When running the scan job, it will successfully scan the website, but immediately after executing the scan command the job stops and returns "error: job failed: exit code 1". This happens without any detail on what has failed in the command.
Here is the code of my job:
zap_scan:
stage: owasp
image:
name: owasp/zap2docker-stable
script:
- mkdir /zap/wrk
- /zap/zap-baseline.py -t http://webURL.com -g gen.conf -r /zap/wrk/report.html
artifacts:
paths:
- /zap/wrk/report.html
Normally, this shouldn't be returning an error, as I have tested the scan command by running it on a locally built, identical docker image and encountered no issues (i.e. the scan ran and the file was generated properly).
Here is the open source code of the zap-baseline.py script. By looking into this, I've found that the script can return error 1 if fail_count is different from 0.
I do not understand why the script behaves differently on a local docker image and in a pipeline. Can you help me please?
|
OWASP ZAP baseline scan returns unexpected error 1 in CI/CD pipeline
|
I resolved the issue, thank you all.
$year=$(Get-Date -Format yy)
$date=$(Get-Date -Format yyyyMMdd)
$week = "{0:d1}" -f ($(Get-Culture).Calendar.GetWeekOfYear((Get-Date),[System.Globalization.CalendarWeekRule]::FirstFourDayWeek, [DayOfWeek]::Monday))
Write-Host "##vso[build.updatebuildnumber]$year.1.$week+$date"
|
How can I add the current week number to the pipeline build name?
1) Tried -> $(Date:ww) # it's not a system function in the build number format
2) Tried ->
$week = "{0:d1}" -f ($(Get-Culture).Calendar.GetWeekOfYear((Get-Date),[System.Globalization.CalendarWeekRule]::FirstFourDayWeek, [DayOfWeek]::Monday))
Write-Host "##vso[task.setvariable variable=$($WeekNumber)]$($week )"
|
DevOps Build Format Name
|
The run outcome should be identical to the Tests tab on the pipeline result page. Check whether there is more than one run in the tests (2 runs in the sample screenshot below).
To count the test outcomes, the REST API Runs - Get Test Run Statistics could be better; it shows the count for each run directly:
GET https://dev.azure.com/{organization}/{project}/_apis/test/runs/{runId}/Statistics?api-version=7.1-preview.3
Another option is to parse the test result file; you can find my answer in the link here.
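A minimal sketch of calling it with a PAT (organization, project, run ID and token are placeholders):
# Basic auth with an empty username and a PAT; the response holds the count per outcome for the run
curl -u ":$AZDO_PAT" \
  "https://dev.azure.com/{organization}/{project}/_apis/test/runs/{runId}/Statistics?api-version=7.1-preview.3"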
|
After completing a pipeline job, I have the option to access the "Tests" section within the pipeline job details. Within this section, I can view the count of tests categorized as "passed," "failed," and "other."
My objective is to create a script that retrieves these counts. To achieve this, I've attempted to use the following endpoint:
https://dev.azure.com/{ORGANIZATION}/{PROJECT}/_apis/test/runs/{runId}/results?api-version=6.1-preview.6
However, when I sum the numbers for the outcomes labeled as "passed," "failed," and "other," the total does not match the counts displayed in the Azure DevOps user interface.
I'm uncertain if I'm using the correct endpoint or if there's an alternative method to retrieve these test counts.
|
How to retrieve the number of passed, failed unit-tests of a pipeline-job from AzureDevops using a script
|
Using the Parquet file as a source gives you some flexibility: you can use parameters to query a single file directly, or query only the files created since the last load.
Also, if you need to change something on the source, you don't have to go and change the external table too.
However, regarding your second question, it's usually much better to filter the data in advance instead of bringing all the data into the data flow and filtering it there. The best option would be, if applicable, to filter the files in the data flow source by using the files' creation date or by reading a specific folder.
|
In a pipeline, in a Data Flow, I can use as a source:
- the parquet file, or
- a SQL query against an external table on the serverless SQL pool that points to the parquet file
Assuming that the sink is another parquet file, which of the two options is better? Is the first option more direct?
On the other hand, what if I put a "filter sequence" between the "parquet source" and the sink: is that better than filtering with a "where" clause in the "serverless sql query source" (the second option)? That is, is "parquet source > filter > parquet sink" better than "select...WHERE... source > parquet sink"?
EDIT: the real question behind this is about Spark Engine vs SQL Engine.
|
Direct Parquet source (Spark Engine) vs "SQL-Select external table (pointing to that Parquet) on Serverless SQL" source (SQL Engine)
|
All looks fine here; it seems that all you need is some patience. It's hard to say anything more without .nextflow.log.
It might take hours depending on file size. Most likely the process is running. Apart from the log, you can open a second terminal and have a look at the CPU/memory usage (use e.g. top or htop). You will see a lot of CPU usage with BWA.
The channels should be fine (otherwise you would not get [0%] 0 of 1, or you would end up with an error). Use .view() to verify.
Also, make sure resources are optimized. Adjust memory and, most importantly, cpus - see the documentation.
You can also cd to the workdir of this process (something like /work/b9/c05177...), check the logs, see if your expected files are there, check the exact command executed, and try to run it "outside" of the pipeline.
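For example, resources can be set per process with directives along these lines (the values here are just placeholders):
process mapping {
    cpus 4           // give bwa more threads and use -t ${task.cpus} in the script block
    memory '8 GB'    // adjust to the RAM actually available on the VM
    // ... inputs, outputs and script as in the question
}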
|
I'm on a Linux virtual machine. I have installed Nextflow, created directories for the input files (index.ref and fastq) and created a pipeline.nf script.
Here below is the code:
#!bin/env nextflow
params.index_dir="/mountpoint/nextflow_container/BWAIndex"
params.ref="/mountpoint/nextflow_container/BWAIndex/human_g1k_v37_decoy.fasta"
params.fastq="/mountpoint/nextflow_container/fastq/ALL_*_{1,2}*"
process mapping {
input:
path index_dir
val ref
tuple val(sample_id), path(fastq)
output:
path "${sample_id}.sorted.bam"
script:
"""
bwa mem -t 2 ${ref} ${fastq} -o ${sample_id}.bam
"""
}
workflow {
index_ch=Channel.fromPath(params.index_dir)
ref_ch=Channel.of(params.ref)
fastq_ch=Channel.fromFilePairs(params.fastq)
mapping(index_ch, ref_ch, fastq_ch)
mapping.out.view()
}
This prints out:
N E X T F L O W ~ version 23.04.4
Launching pipeline.nf [distracted_dijkstra] DSL2 - revision: 16e98f914c
executor > local (1)
[b9/c05177] process > mapping (1) [ 0%] 0 of 1
And it doesn't go on. Does someone know why this happens?
|
Mapping processing doesn't work using Nextflow scripting language
|
- Use a cache for dependencies, which will prevent the pipeline from having to download dependencies from scratch each time it runs. You can use actions/cache@v3 (a GitLab-flavoured sketch is shown below).
- Try a different runner, if possible.
- Use a parallel runner. This will allow the pipeline to run multiple jobs at the same time.
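Since the question uses GitLab CI, a cache for the NuGet packages could look roughly like this (the directory name and project file are assumptions):
variables:
  NUGET_PACKAGES_DIRECTORY: '.nuget'
cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - $NUGET_PACKAGES_DIRECTORY
build:
  stage: build
  script:
    # restore into a folder inside the project so it can be cached between runs
    - dotnet restore "example.csproj" --packages $NUGET_PACKAGES_DIRECTORY
    - dotnet build "example.csproj" -c Release -o /app/build --no-restore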
|
I am trying to improve the performance of my pipeline, but I'm experiencing significant delays during the restore stage. It currently takes approximately 4 minutes for both the restore and build stages to complete.
Here's an example of my pipeline:
build:
stage: build
tags:
- globalrunner
script:
- mkdir src
- dotnet restore "example.csproj"
- dotnet restore "example.csproj"
- echo "Restore success"
- dotnet build "example.csproj" -c Release -o /app/build
- dotnet build "example.csproj" -c Release -o /app/build
only:
- tags
build-artifacts:
stage: build-artifacts
artifacts:
paths:
- ./app/publish/
script:
- dotnet publish "example.csproj" -c Release -o ./app/publish
- dotnet publish "example.csproj" -c Release -o ./app/publish
|
Improving GitLab Pipeline Performance
|
Jenkins passes environment variables to scripts "by value", not "by reference", conceptually speaking. If you want to persist any state between sh steps (it doesn't matter whether they are in the same stage or different stages) you have two options:
- Pass the state information back to Jenkins (serialize it to stdout and capture it in Jenkins), but you sacrifice declarative purity - see the sketch below.
- Save it in some file in the workspace, which is not such a good idea generally speaking, but acceptable in very simple cases.
Pick your poison.
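A minimal sketch of the first option, using names loosely based on the question (the shell output is captured back into an environment variable inside a script block):
stage('Stage_2') {
    steps {
        script {
            // run the shell step, capture its stdout and store it where later stages can see it
            env.MY_VAR = sh(script: 'echo "/path/to/folder/file_name"', returnStdout: true).trim()
        }
    }
}
stage('Stage_3') {
    steps {
        sh 'echo "$MY_VAR"'   // the value set in Stage_2 is available here
    }
}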
|
I have the following pipeline (content simplified, but the structure is exact):
pipeline {
agent any
environment {
my_var = ""
}
stages {
stage('Stage_parallel'){
parallel{
stage('Stage_1') {
steps {
...
}
}
stage('Stage_2'){
steps {
withCredentials([usernamePassword(<LOGIN>, <PASS>)])
{
sh'''
...
var_1 = "/path/to/folder"
var_2 = "file_name"
my_var = "${var_1}/${var_2}"
echo ${my_var} ### returns correct value
'''
}}}}}
stage('Stage_3'){
steps {
sh 'echo ${my_var}' ### returns ""
}}}}
So I tried to declare a global variable, update it in Stage_2 and use it in Stage_3. However, it returns the correct value only inside the Stage_2 sh block, not outside... I also tried to
define the variable outside the pipeline block, like def my_var
define it in Stage_2 as env.my_var = "${var_1}/${var_2}" and use it in Stage_3 as ${env.my_var}
but none of these approaches lets me get the my_var value from Stage_2... So how should I fix the pipeline?
|
How to share variables between stages in declarative pipeline
|
You must set the clone_url configuration property in your runner's config.toml to use the appropriate host/port. This allows you to have a different URL for cloning repositories than the URL used for contacting GitLab to obtain jobs.
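A sketch of the relevant part of config.toml, using the host and port from the question (the rest of the [[runners]] section stays as generated by gitlab-runner register):
[[runners]]
  name = "my-runner"                               # assumption: whatever name was used at registration
  url = "http://git.ideep.apasaico.local:82/"      # used to contact GitLab and fetch jobs
  clone_url = "http://git.ideep.apasaico.local:82" # used to build the repository clone URL
  executor = "shell"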
|
Is there a way to do this?
- A host on the internet with gitlab-runner: 101.987.654.321
- A GitLab repository with the DNS name git.ideep.apasaico.local, which is forwarded to the valid IP 123.456.789.101:82
The gitlab-runner hosts file:
123.456.789.101 git.ideep.apasaico.local
My shell command to register the runner:
sudo gitlab-runner register --url http://git.ideep.apasaico.local:82/ --registration-token $REGISTRATION_TOKEN
So the gitlab-runner is registered and online in GitLab, but because my GitLab external URL is on port 80, when I run a job in the pipeline it says the repository on port 80 was not found.
I need a solution for this.
|
Separate external URL for each gitlab-runner
|
You might try adding the following line in your csproj file:
<UseWPF>true</UseWPF>
This line instructs the build system to incorporate the WPF targets, which are necessary to compile XAML files.
I have built the XAML WPF app successfully; refer to the pipeline below.
My YAML code to build the XAML WPF app with dotnet build:
trigger:
- main
pool:
vmImage: 'windows-latest'
variables:
solution: '**/*.sln'
buildPlatform: 'Any CPU'
buildConfiguration: 'Release'
steps:
- task: UseDotNet@2
inputs:
packageType: 'sdk'
version: '3.x'
- task: DotNetCoreCLI@2
inputs:
command: 'build'
projects: '$(solution)'
arguments: '--configuration $(buildConfiguration)'
Output: (screenshot of the successful build)
Also, as an alternative, instead of dotnet build, try using VSBuild to build your pipeline, like below:
trigger:
- master
pool:
vmImage: 'windows-latest'
variables:
solution: '**/*.sln'
buildPlatform: 'Any CPU'
buildConfiguration: 'Release'
steps:
- task: NuGetToolInstaller@1
- task: NuGetCommand@2
inputs:
restoreSolution: '$(solution)'
- task: VSBuild@1
inputs:
solution: '$(solution)'
platform: '$(buildPlatform)'
configuration: '$(buildConfiguration)'
- task: VSTest@2
inputs:
platform: '$(buildPlatform)'
configuration: '$(buildConfiguration)'
Output: (screenshot of the successful build)
|
I have an Azure DevOps build for a new class library project which is building correctly (.NET Framework 4.6.2).
I have added some XAML files from another project via Visual Studio and it displayed the error "InitializeComponent does not exist in the current context". I found that if I changed the XAML file property Build Action to Page, this resolved the issue in Visual Studio and it built fine (this setting was on the original files but is lost when you copy the files over to a new project).
The issue now is that I am getting the same error in the DevOps build on DotNet Build.
Are there additional steps required when building a project that contains XAML files? Or are there any issues adding XAML files to a class library?
|
Azure DevOps build fails for XAML files
|
Depending on your exact use case, Dagster Pipes and dagster-k8s may solve your problem. According to them, it is "a protocol for integrating and launching compute into remote execution environments from Dagster and a toolkit for building those integrations".
In your example, it would look like this:
@asset
def assets_def(
context: AssetExecutionContext,
pipes_k8s_client: PipesK8sClient,
) -> MaterializeResult:
return pipes_k8s_client.run(
image="some_image", context=context
).get_materialize_result()
|
I am trying to understand what I can really do with Dagster. I have some Python code already containerised and pushed to a container registry.
I was wondering if, in a Dagster op or asset, I can read those images and run the final pipeline on my Kubernetes cluster.
For example, I may have an image that performs some multiplication and it's on GCR. Does Dagster offer any tool for pulling this image and then allowing me to do something like:
@asset(PULL_IMAGE)
def my_asset():
    from MY_IMAGE_CODE import function_1, function_2
    function_1()
Then, I would like to use Dagster-kubernetes to run this pipeline on Kubernetes.
I tried to get an idea from the Dagster docs but I couldn't find anything.
I had a look at various GitHub repos but many of them are using an old version of Dagster.
I jumped into https://docs.dagster.io/_apidocs/libraries/dagster-k8s and https://dagster.io/integrations/dagster-docker but I am not really understanding how to link them.
|
Is it possible to run an asset from an image?
|
There is no direct tool to convert SSIS packages into ADF pipelines, as ADF as a tool is still under development and doesn't yet have all the features that SSIS has.
You would have to understand your current SSIS flow and manually design the ADF pipelines based on functionality, and in some cases integrate other Azure offerings like Azure Functions or Azure Batch to write custom logic similar to a Script Task in SSIS.
|
Hi there, I have SSIS packages and I want to move the jobs to Azure Data Factory pipelines.
I don't mean executing the SSIS package. I'm looking for an easy way to move the packages' activities to an ADF pipeline. All help is welcome.
Regards
|
How to Convert SSIS Package xml to Azure Data Factory xml Format?
|
Create a PAT in Azure DevOps and ensure it has Build - Read & execute permissions.
Then take this PAT and Base64 encode it, using:
$MyPat = 'yourPAT'
$B64Pat = [Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes("`:$MyPat"))
You can then pass this into your REST API headers like:
headers = {
"Authorization": "Basic <base64-encode-pat>"
}
It should work after this.
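For illustration, a curl call with that header might look like this (it is assumed here that an empty JSON body is enough to queue a run on the default branch):
curl -X POST \
  -H "Authorization: Basic <base64-encoded-pat>" \
  -H "Content-Type: application/json" \
  -d '{}' \
  "https://dev.azure.com/{organization}/{project}/_apis/pipelines/{pipelineId}/runs?api-version=7.2-preview.1"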
|
As I see in the latest version of the documentation (https://learn.microsoft.com/en-us/rest/api/azure/devops/pipelines/runs/run-pipeline?view=azure-devops-rest-7.2), to use this API
POST https://dev.azure.com/{organization}/{project}/_apis/pipelines/{pipelineId}/runs?api-version=7.2-preview.1
I need oauth2:
Authorization URL: https://app.vssps.visualstudio.com/oauth2/authorize&response_type=Assertion
Token URL: https://app.vssps.visualstudio.com/oauth2/token?client_assertion_type=urn:ietf:params:oauth:client-assertion-type:jwt-bearer&grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer
How do I use this? I know that Azure has several authorization methods and a large number of options on how to use them. And to be honest, I am completely confused by them. It is not clear how exactly to use them, when, what access to set for the user, and what to send in requests.
If I try to generate a token like this
GET https://login.microsoftonline.com/0000000-0888-00000-be70-0000000/oauth2/v2.0/token Content-Type: application/x-www-form-urlencoded Content-Length: 229 grant_type=password&username=[user]&password=[pass]&client_id=[clientID]&client_secret=[clientSecret]&scope=https%3A%2F%2Fgraph.microsoft.com%2F.default
I successfully get a Bearer token. But when I use this token in
POST https://dev.azure.com/{organization}/{project}/_apis/pipelines/{pipelineId}/runs?api-version=7.2-preview.1
I get a 203 status and of course my pipeline is not running.
|
Run pipeline using Azure REST API
|
You don't reference output variables by using $(myTaskDisplayName.myVar); instead, use $(myTaskName.myVar).
pool:
vmImage: windows-latest
steps:
- script: |
set ARTEFACTNAME=lorem.txt
echo ##vso[task.setvariable variable=psJobVariable;isOutput=true]%ARTEFACTNAME%
echo "lorem ipsum dolor" > %ARTEFACTNAME%
displayName: 'compile_windows'
name: 'compile_windows'
- task: PublishBuildArtifacts@1
inputs:
PathtoPublish: '$(Build.SourcesDirectory)/$(compile_windows.psJobVariable)'
ArtifactName: 'ci_test_windows'
publishLocation: 'Container'
displayName: 'publish windows'
EDIT - 11/12/2023: example of the working pipeline (screenshot)
|
Hi, I have a small snippet to run on Azure DevOps. It is from a larger project using "windows-latest". It runs as a script, i.e. a cmd.exe bat step:
steps:
- script: |
set ARTEFACTNAME=lorem.txt
echo ##vso[task.setvariable variable=psJobVariable;isOutput=true]%ARTEFACTNAME%
echo "lorem ipsum dolor" > %ARTEFACTNAME%
displayName: 'compile_windows'
- task: PublishBuildArtifacts@1
inputs:
PathtoPublish: '$(Build.SourcesDirectory)/$(compile_windows.psJobVariable)'
ArtifactName: 'ci_test_windows'
publishLocation: 'Container'
displayName: 'publish windows'
The error from the publish step is:
##[error]Publishing build artifacts failed with an error: Not found PathtoPublish: D:\a\1\s\$(compile_windows.psJobVariable)
which means this variable is not set. I can confirm it creates the lorem.txt file with the content, but it does not create the pipeline variable "psJobVariable" for the publish step.
What needs to be changed to make this work? I tried several placements of " around the "echo ##vso[..." line but no help.
I am expecting this variable to be set to lorem.txt.
Azure Devops Pipeline howto export variable dynamically in cmd.exe script
|
This works for me:
In the job config, activate "Permission to Copy Artifact", "Projects to allow copy artifacts": *
In the job config, under Post-build actions, Archive the artifacts, Files to archive: result.html
In the pipeline script:
stage("foo-stage") {
steps {
script {
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', catchInterruptions: false) {
timeout(time: 5, unit: 'MINUTES') {
String jobName = "foo-job"
String artifactsFilter = "*.html"
result = build job: jobName
copyArtifacts filter: artifactsFilter, projectName: jobName , selector: specific("${result.number}")
archiveArtifacts artifacts: artifactsFilter
}
}
}
}
}
|
I run a Jenkins pipeline which executes jobs on a virtual machine (always the same one) using agent.jar. These jobs produce artifacts in the VM workspace directory, and these files are successfully transferred to Jenkins. Now I want the pipeline to retrieve data from that VM workspace too.
How can I configure archiveArtifacts to access my VM?
|
Poststep archiveArtifacts: how to retrieve files from other machine?
|