Response (string, 8 to 2k chars) | Instruction (string, 18 to 2k chars) | Prompt (string, 14 to 160 chars) |
---|---|---|
As wrap is a step, we have to call it from a steps or script context. Only the latter allows us to create nested stages inside of a wrap block. Try this:

pipeline {
agent any
stages {
stage('Build the context') {
steps {
echo 'No Xvfb here'
}
}
stage('Test Start') {
steps {
script {
wrap([$class: 'Xvfb', screen: '1920x1080x24']) {
stage('Test Suite 1') {
echo 'Use Xvfb here'
}
stage('Test Suite 2') {
echo 'Use Xvfb here'
}
}
}
}
}
//...
}
}

The extra stage "Test Start" may look a bit ugly, but it works. Note: there are no steps blocks required in the nested test stages, because we are already inside a script block, so the same rules as in scripted pipeline apply.
|
In my declarative pipeline I have several stages where Xvfb is not required and several testing stages where it is. Is it possible to define a Jenkins wrapper once for several stages? Something like this:

pipeline {
agent any
stages {
stage('Build the context') {
steps {
echo 'No Xvfb here'
}
}
wrap([$class: 'Xvfb', screen: '1920x1080x24']) {
stage('Test Suite 1') {
steps {
echo 'Use Xvfb here'
}
}
stage('Test Suite 2') {
steps {
echo 'Use Xvfb here'
}
}
}
stage('cleanup') {
steps {
echo 'No Xvfb here'
}
}
}

I'm getting compilation errors wherever I put the wrap block for several stages:

WorkflowScript: 10: Expected a stage @ line 10, column 17.
wrap([$class: 'Xvfb', screen: '1920x1080x24'])
|
Wrap several stages in Jenkins pipeline
|
You should be able to use only: with the variables: option. I didn't test this, but please try adding this to a job:

only:
variables:
    - $CI_MERGE_REQUEST_EVENT_TYPE == "detach"

See the documentation for more details: https://docs.gitlab.com/ee/ci/yaml/README.html#onlyvariablesexceptvariables

Note (from the comments): GitLab documentation now recommends using rules instead of only.
|
I am trying to run a pipeline when a merge request in GitLab is detached, meaning it is not successful and is rejected or closed without being merged into the branch.
So is it possible to execute it only for CI_MERGE_REQUEST_EVENT_TYPE=detach as we execute for particular branches?
|
In GitLab Pipelines is there a way to run a pipeline when a Merge Request is detached?
|
I'm going to go ahead and post my own answer to this, in case anyone else ever finds themselves pulling out their hair trying to resolve this. Everything will hint that you can do this, but there's no real, practical way to.

If you want to allow mods or additional content to be dropped into your game after the fact, you have to design your own content management system. It's easy with something like DotNetZip and a little out-of-the-box thinking, and despite my initial fears, if it's not faster than the MonoGame content system, it's certainly no slower.

Comment: The MonoGame Content Pipeline essentially processes content files from their raw format into cross-platform XNB files. This really comes down to the type of content you're loading. For example, loading a WAV file on a Windows desktop could be quite different than doing it on an Android mobile device. That said, I agree that the MGCB is not really adding a whole lot of value most of the time. If you can figure out a way to make your game without it, go for it.
|
I haven't found a clear-cut answer on whether I can do this, nor how to go about it. I'd like to allow my game to load additional MGCBs into content managers, thus allowing additional content to be made after I release my game. I've seen talk of building them on the fly, but nothing about loading them or using them. If I missed this question being asked and answered elsewhere, please feel free to correct me.

My initial solution was to just not use the MGCB content pipeline at all, replacing it with an uncompressed zip structure similar to the classic "pak" concept games have used in the past. I fear inefficiency with that. I'd like to use the "native" content pipeline system if I can.
|
Loading an MGCB at runtime in MonoGame?
|
You can't run a pipeline or a complete job with a local repository, only a task. But that's OK, as a job's main goal is to set up inputs and outputs for a task, and you will be providing them locally.

The command is fly execute, and the complete doc is here: https://concourse-ci.org/tasks.html#running-tasks

To run a task locally you will have to have the task in a separate YAML file, not inline in your pipeline. The basic command, where you run the task run-tests.yml with the input repository set to the current directory:

fly -t my_target execute --config run-tests.yml --input repository=.
|
I am trying to connect a local git repository to Concourse so that I can perform automated testing in my local environment even before committing the code to the Git repo. In other words, I want to perform some tasks before git commit using a Concourse pipeline, for which I want to mount my local working directory into the Concourse pipeline jobs.
|
How to mount local directory to concourse pipeline job?
|
You can use purrr::pluck():

mtcars$mpg %>% boxplot() %>% purrr::pluck("out")

Depending on the output type of the function, you can also use [[:

mtcars$mpg %>% boxplot() %>% .[["out"]]
|
From this tutorial, I understand that pipes "take the output of one statement and make it the input of the next statement." Can I select a piece of an output and make it an input of the next statement?
For example, I'd like to know where the outliers are in this dataset:

mtcars$mpg %>% boxplot()$outliers %>% which

Thanks!
|
How to use a piece of an object using a pipe
|
Sklearn's Pipeline will apply transformer.fit_transform() when pipeline.fit() is called and transformer.transform() when pipeline.predict() is called. So for your case, StandardScaler will be fitted to X_train and then the mean and stdev from X_train will be used to scale X_test.

The transform of X_test would indeed look different from that of X_train and X_test combined. The extent of the difference would depend on the difference in the distributions between X_train and X_test. However, if they are randomly partitioned from the same original dataset, and of a reasonable size, the distributions of X_train and X_test will probably be similar.

Regardless, it is important to treat X_test as though it is out of sample, in order for it to be a (hopefully) reliable metric for unseen data. Since you don't know the distribution of unseen data, you should pretend you don't know the distribution of X_test, including the mean and stdev.
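A minimal runnable sketch of that behaviour (the dataset and model here are illustrative assumptions, not from the original post):

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([('scaler', StandardScaler()), ('model', SVC())])
pipeline.fit(X_train, y_train)      # scaler.fit_transform(X_train), then SVC.fit
y_pred = pipeline.predict(X_test)   # scaler.transform(X_test) using X_train's mean/stdev

# The equivalent manual steps:
scaler = StandardScaler().fit(X_train)     # statistics come from X_train only
X_test_scaled = scaler.transform(X_test)   # X_test is never used for fitting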
|
I am using StandardScaler to normalize my dataset, that is, I turn each feature into a z-score by subtracting the mean and dividing by the std. I would like to use StandardScaler within sklearn's pipeline and I am wondering how exactly the transformation is applied to X_test. That is, in the code below, when I run pipeline.predict(X_test), it is my understanding that the StandardScaler and SVC() are run on X_test, but what exactly does StandardScaler use as the mean and the std? The ones from X_train, or does it compute those only for X_test? What if, for instance, X_test consists only of 2 variables? The normalization would look a lot different than if I had normalized X_train and X_test altogether, right?

steps = [('scaler', StandardScaler()),
('model',SVC())]
pipeline = Pipeline(steps)
pipeline.fit(X_train,y_train)
y_pred = pipeline.predict(X_test)
|
Using Standardization in sklearn pipeline
|
You were correct: (k-1)*t + n*t is the time for executing n commands in the pipeline.

You should think of it as follows: in the first (k-1) cycles (of length t) the pipe is filling up. After that time, 0 commands have fully executed, but the whole pipe is filled. From then on, every cycle t another command finishes executing (because of the pipeline effect), hence the n*t term.

In total, (k-1)*t + n*t is the time to execute n commands in the pipeline. Hope that makes it clearer!
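A small numeric check of that formula (the values of n, k and t below are arbitrary, chosen only to illustrate the limit):

def pipeline_speedup(n, k, t=1.0):
    """Speedup of a k-stage pipeline over a non-pipelined machine for n instructions."""
    non_pipelined = n * k * t           # each instruction takes k*t on its own
    pipelined = (k - 1) * t + n * t     # (k-1)*t to fill the pipe, then one result per cycle
    return non_pipelined / pipelined

for n in (10, 100, 10_000, 1_000_000):
    print(n, round(pipeline_speedup(n, k=5), 3))
# the printed speedup approaches k (= 5) as n grows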
|
I roughly (abstractly) understand why a pipeline is k times faster than a non-pipelined one, like this: a k-stage pipeline divides the circuit into k parts, each stage has the same transistor delay (ideally), so it is k times faster (like using a conveyor belt system in a car factory). But I cannot understand this mathematical expression:

clock cycle time = t
number of command = n
speedup = (n*k*t)/((k-1)*t+n*t) = (n*k*t)/(k*t+(n-1)*t)
if n -> infinity: speedup is k

What I don't know is: what does ((k-1)*t + n*t) mean? I understand that (n*k*t) is the non-pipelined time, so I believe ((k-1)*t + n*t) should be the pipelined time. But why is ((k-1)*t + n*t) the pipelined time?
|
Proving that a k-stage pipeline can be at most k times faster than a non-pipelined one
|
You can do:

DT <- datatable(data = datasetInput3(), options = list(pageLength = 25))
for(i in 1:25){
DT <- DT %>%
formatStyle(colnames(entities)[i],
backgroundColor = styleEqual(vec1[i], c('yellow')))
}
DT
|
I am developing an R Shiny app and I want to highlight some values in the datatable. I have a dataframe (entities) and a vector (vec1), and I want to highlight a specific value in each column when the value in that column is equal to the corresponding value in vec1. Right now I achieve it by repeating the formatStyle code 25 times, but I believe it can be done with a loop. Could anyone help me?

vec1 = c(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25)
datatable(data = datasetInput3(), options = list(pageLength = 25)) %>% formatStyle(
colnames(entities)[1],
backgroundColor = styleEqual(vec1[1], c('yellow'))) %>% formatStyle(
colnames(entities)[2],
backgroundColor = styleEqual(vec1[2], c('yellow')))
...
%>% formatStyle(
colnames(entities)[25],
backgroundColor = styleEqual(vec1[25], c('yellow')))
})
|
Write a loop of formatStyle in R shiny
|
As far as I know, there is no ready-to-use plugin or integration for GitLab CI and Jira. You can do this setup by changing your .gitlab-ci.yml file and using Jira's REST API.

Create another job in your stage in the .gitlab-ci.yml file and make it run with the condition when: on_failure (https://docs.gitlab.com/ee/ci/yaml/#when).

Make an API call using curl to create a new issue in your Jira project (https://developer.atlassian.com/server/jira/platform/rest-apis/).
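For illustration, a sketch of that API call in Python (the Jira URL, project key and credentials are placeholders; the create-issue endpoint is the standard one documented at the Atlassian link above):

import requests

JIRA_URL = "https://jira.example.com"   # placeholder
AUTH = ("ci-bot", "api-token")          # placeholder credentials

payload = {
    "fields": {
        "project": {"key": "PROJ"},     # placeholder project key
        "summary": "Pipeline failed",
        "description": "Created automatically from GitLab CI on job failure.",
        "issuetype": {"name": "Bug"},
    }
}

resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
resp.raise_for_status()
print("Created issue:", resp.json()["key"])

The same request can be made with curl from the when: on_failure job in .gitlab-ci.yml.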
|
I can't find anything about creating Jira tasks from GitLab CI.
I created them from Jenkins this way:

def epicIssueFields = [fields: [project : [key: "$PROJECT_ABBR"],
description : 'New JIRA Created from Jenkins.',
                     customfield_10007: "This is the Epic Name - $BUILD_NUMBER", //epic name
customfield_10100: [id: '10100'],
                     summary : "Best subject in the world", //subject
issuetype : [name: 'Epic']
]]
stage('Creating JIRA EPIC') {
def epicIssue = jiraNewIssue issue: epicIssueFields, site: 'TEST_JIRA'
EPIC_NUMBER = epicIssue.data.key
echo EPIC_NUMBER
}

How can I do it from GitLab CI? Are there some specific commands for this in GitLab?
|
How to create Jira task from GitLab CI?
|
You can use queue to schedule a build of a job. See https://github.com/jenkinsci/job-dsl-plugin/wiki/Job-DSL-Commands#queue

pipelineJob('example') {
// ...
}
queue('example')
|
I have trouble running the initial pipeline from Job DSL. Is there a way to run the initial pipeline automatically after it is created? I am creating a dynamic pipeline for each feature branch, so each time a feature pipeline is created, I want to run it automatically after the pipeline and its job creation are completed. I don't want to start the pipeline manually from Jenkins, because it is too tedious to do for each created feature pipeline.
|
How can I perform/run Initial Pipeline from Groovy based Job DSL
|
Seems like you're looking for a semi_join:

a %>% filter(var1 == 1) %>% semi_join(b, ., by = "id")
# id var2
# 1 2 0.8283845
# 2 2 -0.5286006

semi_join returns all rows from x where there are matching values in y,
keeping just columns from x.A semi join differs from an inner join because an inner join will
return one row of x for each matching row of y, where a semi join will
never duplicate rows of x.

Note (from the comments): semi_join comes from dplyr.
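For readers coming from Python, the same semi-join idea can be sketched with pandas (an analogy added for illustration, not part of the original R answer):

import pandas as pd

a = pd.DataFrame({"id": [1, 2, 3], "var1": [0, 1, 3]})
b = pd.DataFrame({"id": [1, 1, 2, 2, 3, 3], "var2": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]})

# keep rows of b whose id appears in the filtered a; no columns of a are added
result = b[b["id"].isin(a.loc[a["var1"] == 1, "id"])]
print(result)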
|
I'm working in R with two tables connected by an id variable (and for some reasons I don't want to merge them). The example objects are presented below:

a <- data.frame(id=c(1L,2L,3L),
var1=c(0,1,3))
b <- data.frame(id=c(1L,1L,2L,2L,3L,3L),
             var2=rnorm(6))

What I want to do is find rows in the first table that satisfy a given condition on var1, select only the ids, and then use these id values to filter observations in the second table. I wonder if I can do this in one pipeline, as follows:

a %>%
filter(var1==1) %>%
select(id) %>%
filter(b,id==.)

Or:

a %>%
filter(var1==1) %>%
select(id) %>% c() %>% unlist()
filter(b,id==.)

Both examples don't work, probably because I can pass only data.frames or other objects through the pipe operator and not atomic values. Am I correct?
|
Passing value in pipeline operator
|
If you explained the general use case a bit more, it might be easier to point you in the right direction. Do you truly need a vector as the output (i.e. random access capabilities)? Or do you just want a way to transform data from one source and put it elsewhere?

If you don't actually need the random access of the vector, I would say using core.async and channels is the best way to do this. This way you can have specific threads or go loops handling each step of the data publishing, transformation, and consumption, solely by placing a channel in between them.

Here are a few good articles to learn about core.async from (but there are many more out there if you do a quick google):

http://www.braveclojure.com/core-async/
http://clojure.com/blog/2013/06/28/clojure-core-async-channels.html
https://tbaldridge.pivotshare.com/ - check out the core.async-specific videos. They're awesome and go really in depth.

Hopefully that helps!

Comment from the asker: I do not necessarily need vectors; depending on the data and the particular case my collections will more likely be maps or lists. But in all cases I'm looking only at tail values, so channels are very much relevant and I'm definitely considering them.
|
Say I have a vector in Clojure:

(def myvec (atom (vector 1)))

And I have this function that adds new values to the vector:

(defn inc-myvec! []
  (swap! myvec conj (inc (last @myvec))))

Assume that this function gets called from time to time on some trigger. Let's trigger it a few times.

(dotimes [i 5] (inc-myvec!))
@myvec
;=> [1 2 3 4 5 6]

Now, if I want to subtract 1 from every element in that vector I would do something like this:

(def myvec2 (atom (mapv dec @myvec)))
@myvec2
;=> [0 1 2 3 4 5]

This is of course not great, as I have to do this on every change of myvec. It is also wasteful, as it processes the whole vector on every change. We can do better. Let's implement the observer pattern and calculate only the last value which gets appended:

(defn watcher [_ _ _ new-myvec]
(swap! myvec2 conj (dec (last new-myvec))))
(add-watch myvec :watcher watcher)

This is not bad. But ideally I'd want the watcher to operate in another thread, for starters. Are there better, more idiomatic ways of dealing with this kind of transformation, where you basically need to react to an incoming data stream as it arrives and transform it into something else?
|
Idiomatic way to transform a growing vector in Clojure
|
You cannot edit from stdin when using --remote. From :h --remote:

--remote [+{cmd}] {file} ...
Open the file list in a remote Vim. When
there is no Vim server, execute locally.
There is one optional init command: +{cmd}.
This must be an Ex command that can be
followed by "|".
The rest of the command line is taken as the
file list. Thus any non-file arguments must
come before this.
You cannot edit stdin this way |--|.
The remote Vim is raised. If you don't want
this use >
vim --remote-send "<C-\><C-N>:n filename<CR>"

--remote-silent [+{cmd}] {file} ...
As above, but don't complain if there is no
server and the file is edited locally.

--remote-tab-silent
Like --remote-silent but open each file in a
new tabpage.
|
The following command successfully launches vim so that it reads the edit buffer from standard input.

echo hi | vim -

But this one does not work.

echo hi | vim --remote-tab-silent -

When the above command is run, the following warning occurs and vim quits.

Vim: Warning: Input is not from a terminal
Vim: Error reading input, exiting...
Vim: preserving files...
Vim: Finished.

Why does it not read from standard input in the second case? The help message of vim seems to indicate that it should have worked:

$ vim -h | head
VIM - Vi IMproved 7.4 (2013 Aug 10, compiled Mar 31 2015 23:36:07)
usage: vim [arguments] [file ..] edit specified file(s)
or: vim [arguments] - read text from stdin
or: vim [arguments] -t tag edit file where tag is defined
or: vim [arguments] -q [errorfile] edit file with first error
Arguments:
-- Only file names after this
-g Run using GUI (like "gvim")
$ vim -h | grep remote
--remote <files> Edit <files> in a Vim server if possible
--remote-silent <files> Same, don't complain if there is no server
--remote-wait <files> As --remote but wait for files to have been edited
--remote-wait-silent <files> Same, don't complain if there is no server
--remote-tab[-wait][-silent] <files> As --remote but use tab page per file
--remote-send <keys> Send <keys> to a Vim server and exit
--remote-expr <expr> Evaluate <expr> in a Vim server and print result
|
How to make vim read the buffer from standard input when run with the --remote-tab-silent option?
|
I tried several things, but in the end @Etan Reisner proved to me (unintentionally) that even if there is a way to do what you asked (clever, Etan), it's not what you actually want. If you want to be sure to read the numbers back sequentially then the reads have to be serialized, which the commands in a pipeline are not.

Indeed, that applies to your original approach as well, since command substitutions are performed in subshells. I think you could reliably do it like this, though:

debugNum=1
eval "
outputStuff |
tee debugFile.$((debugNum++)) |
filterStuff |
transformStuff |
doMoreStuff |
tee debugFile.$((debugNum++)) |
endStuff > endFile
"That way all the substitutions are performed by the parent shell, on the string, before any of the commands is launched.
|
I ran into a situation where I was doing:

outputStuff |
filterStuff |
transformStuff |
doMoreStuff |
endStuff > endFile

I want to be able to insert some debug tracing stuff in a fashion like:

tee debugFile.$((debugNum++)) |

but obviously the pipes create subshells, so I wanted to do this instead.

exec 5< <(seq 1 100)
outputStuff |
tee debugFile.$(read -u 5;echo $REPLY;) |
filterStuff |
tee debugFile.$(read -u 5;echo $REPLY;) |
transformStuff |
tee debugFile.$(read -u 5;echo $REPLY;) |
doMoreStuff |
endStuff > endFile

i.e., I want the debug line I insert to be identical, so I don't have to worry about stepping on various stuff. The read/REPLY echo seems really ugly. I suppose I could wrap it in a function, but is there a way to read one line from a file descriptor to stdout without closing the fd (like head -1 would close the fd if I did head -1 <&3)?
|
is there a way other than read/echo to read one line from a file descriptor to stdout without closing the fd?
|
Maybe you could try something like the following:

tshark OPTIONS 2>&1 | grep --line-buffered PATTERN | while read line; do
# actions for when the pattern is found, the matched input is in $line
break
done

The 2>&1 is important so that when PATTERN is matched and the while loop terminates, tshark has nowhere to write to and terminates because of the broken pipe.

If you want to keep tshark running and analyze future output, just remove the break. This way, the while loop never terminates and it keeps reading the filtered output from tshark.
|
I'm trying to program a little "dirty" website filter, e.g. a user wants to visit an adult website (based on the domain name). So basically, I have something like:

#!/bin/bash
sudo tshark -i any tcp port 80 or tcp port 443 -V | grep "Host.*keyword"

It works great, but now I need to perform some actions after I find something (iptables and dropping packets...). The problem is that the tcp dumping is still running. If I had a complete file with data, the thing I'm trying to achieve would be easy to solve. In pseudocode, I'd like to have something like:

if (tshark and grep found something)
iptables - drop packets
sleep 600 # a punishment for an user
iptables accept packets I was dropping
else
still look for a match in the tcp dump that's still running

Thanks for your help.
|
Grepping a tcpdump with tshark
|
Here it is:

ctrl left alt Q = @
ctrl left alt ß = \
ctrl left alt ^ = |
ctrl left alt E = €

or:

right alt Q = @
right alt ß = \
right alt ^ = |
right alt E = €
|
I tried to type the pipe character in Ubuntu but it didn't work. On a German layout it's done by holding alt + ctrl + the bracket key above <>. In Ubuntu it doesn't work, and I already tried to choose different keyboard layouts by entering this in the terminal:

sudo dpkg-reconfigure keyboard-configuration

Has anyone managed to do it?
|
Creating the Pipe | with a german Keyboard Layout in Ubuntu 12.04
|
You ran a null command, i.e. a simple command with just one or more redirections. This performs the redirection but nothing else. > file is a way to truncate file to zero bytes. A null command ignores its stdin, which is why you don't see the ls output.
I believe POSIX leaves this undefined (in fact, zsh reads stdin when you type > file). There is an explicit null command named : (colon). Null commands are useful if you just need them for their side effects, i.e. redirection and variable assignment, as in:

: ${FOO:="default value"} # Assign to FOO unless it has a value already.
|
I accidentally ran the following command in Bash:

$ ls -l | > ../test.txt

And I got an empty test.txt. What happened?
|
What happens when pipe char directly followed by the redirection in Bash?
|
Please try this using a custom renderer, which will solve your problem easily:

JTable myTable = new JTable();
// You can specify the columns you need to do the required action
myTable.getColumnModel().getColumn(0).setCellRenderer(new MyRenderer());
public class MyRenderer extends DefaultTableCellRenderer {
    // This overridden method is called for every cell the JTable paints
    public Component getTableCellRendererComponent(JTable table,
            Object obj, boolean isSelected, boolean hasFocus, int row, int column) {
        Component cell = super.getTableCellRendererComponent(table, obj, isSelected, hasFocus, row, column);
        // Use row/column here to change the color for the row you need, e.g.
        if (isSelected) { // cell selected
            cell.setBackground(Color.GREEN);
        }
        return cell;
    }
}

Note: this renderer can be used for more than color highlighting; please refer to custom JTable rendering. For timing your changes in response to the queue, you can schedule them in a separate thread.
|
I need help. I have two tables. In the instruction table, each row must be highlighted according to which instruction is being executed in the pipeline stages. Say, for example, at time t10, I5 is in the IS stage, so I5 in the instruction table must be highlighted, or the color of that row in the instruction table must change. Say I5's row is red, I6's row is pink, I7 is green, I8 is gray, I9 is orange. I really need your expertise, thank you. :)
|
Setting color in a row of a Jtable
|
I am the author of Ruffus and have just checked changes in to ruffus, in the Google source code repository, to allow it to cooperate with Pweave. It will be in the next release.

You can get the latest (fixed) source with the following command line if you are impatient:

hg clone https://[email protected]/p/ruffus/

Leo

The details are as follows: Ruffus uses the fully qualified name (with module name) of each ruffus task function to uniquely identify code so that pipeline tasks can be referred to by name.

The Pweave code was very straightforward. Nice! Pweave sends chunks of code at a time to the Python interpreter to be exec-ed chunk by chunk. Of course chunks do not belong to any "module", and task functions have function.__module__ values of None rather than any string. A single judicious str() converting None to "None" seems to have solved the problem.

Leo
|
I am interested in developing self-documenting pipelines. Can I wrap Ruffus tasks in Pweave chunks?

Pweave and Ruffus
==============================================================
**Let's see if Pweave and ruffus can play nice**
<<load_imports>>=
import time
from ruffus import *
@
**Do this**
<<task1>>=
task1_param = [
[ None, 'job1.stage1'], # 1st job
[ None, 'job2.stage1'], # 2nd job
]
@files(task1_param)
def first_task(no_input_file, output_file):
open(output_file, "w")
@

I get the feeling the Ruffus decorators are throwing Pweave off:

$ Pweave ruffus.Pnw
Processing chunk 1 named load_imports
Processing chunk 2 named task1
<type 'exceptions.TypeError'>
("unsupported operand type(s) for +: 'NoneType' and 'str'",)Perhaps there is a workaround?
|
Can Pweave play nice with Ruffus?
|
Task SSMClientToolsSetup@1 is published by DigiCert to assist with setting up certificate tools in an ADO pipeline. Part of the task involves downloading an MSI installer from https://one.digicert.com/signingmanager/api-ui/v1/releases/smtools-windows-x64.msi/download. Recently this download has been failing and causing the task to fail:

file after write 22288095 error when executing setup task of STM
Error: The process 'C:\Windows\system32\msiexec.exe' failed with exit
code 1620

A workaround, as you suggest, is to change the task to include automatic retry.

Old task yaml:

- task: SSMClientToolsSetup@1
  displayName: Setup SMTools

New yaml:

- task: SSMClientToolsSetup@1
  retryCountOnTaskFailure: 10
  displayName: Setup SMTools

For a custom build agent, I believe it would be possible to manually install the tools on the build agent instead (so they don't need setting up every time, and can't fail due to a download error).
|
Azure pipeline fails on the SSMClientToolsSetup@1 task:

This installation package could not be opened. Contact the application vendor to verify that this is a valid Windows Installer package.

error when executing setup task of STM Error: The process 'C:\Windows\system32\msiexec.exe' failed with exit code 1620.
at ExecState._setResult (D:\a_tasks\SSMClientToolsSetup_63dc66e6-daa5-4c1c-97f2- 4312143a6d6c\1.7.0\node_modules\azure-pipelines-task-lib\toolrunner.js:942:25)
at ExecState.CheckComplete (D:\a_tasks\SSMClientToolsSetup_63dc66e6-daa5-4c1c-97f2-4312143a6d6c\1.7.0\node_modules\azure-pipelines-task-lib\toolrunner.js:925:18)
##[error]The process 'C:\Windows\system32\msiexec.exe' failed with exit code 1620
at ChildProcess. (D:\a_tasks\SSMClientToolsSetup_63dc66e6-daa5-4c1c-97f2- 4312143a6d6c\1.7.0\node_modules\azure-pipelines-task-lib\toolrunner.js:838:19)
at ChildProcess.emit (events.js:198:13)
at maybeClose (internal/child_process.js:982:16)
at Process.ChildProcess._handle.onexit (internal/child_process.js:259:5)

I am following these instructions: https://docs.digicert.com/en/digicert-keylocker/ci-cd-integrations/plugins/azure-devops-client-tools-extension.html
|
DigiCert - Azure DevOps yaml task SSMClientToolsSetup@1 fails
|
The stall and forward caused by the load-use delay are detected by the processor in the situation where there is a Read-After-Write hazard. Thus, when there is no RAW hazard, there is no stall and no forward. So, the following runs at full speed:

lw r2, 4(r1)
addi r1, r1, 4

The standard MIPS 5-stage pipeline has no concept of Write-After-Read hazards (it doesn't have that hazard, so it isn't looking for that situation).

"However, in this case where the R2 register is overwritten anyway and it is not being used in the calculation, would it be necessary to stall the pipeline?" No, in that sense your sequence is no different from mine: both will run at full speed without stall or forward.

"Then is it necessary to even have this load instruction at the start?" No, it is useless. The only possible architectural purpose could be to test the resulting memory address to see if it causes a fault. From a micro-architectural perspective, it might have side effects on the cache.

FYI, lw r2, array(r1) is most likely a pseudo instruction, in particular if array is a global data label for the array (in the default memory configuration). When dealing with pipeline hazards, it is probably best to stick with real MIPS instructions, since that pseudo instruction is expanded by the assembler into 2-3 real instructions. Of course, the last instruction will be a load, so in this particular case there is still a load followed by what comes next.
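A tiny sketch of the hazard check being described (a deliberate simplification written only to illustrate the reasoning; it is not how a real interlock is implemented):

def needs_load_use_stall(load_dest, next_instruction_sources):
    """A load-use stall is only needed when the instruction right after a load
    reads (RAW hazard) the register that the load writes."""
    return load_dest in next_instruction_sources

# lw r2, array(r1) followed by sub r2, r3, r1: sub reads r3 and r1, not r2
print(needs_load_use_stall("r2", {"r3", "r1"}))   # False -> no stall, no forward

# lw r2, array(r1) followed by sub r4, r2, r1: sub reads r2
print(needs_load_use_stall("r2", {"r2", "r1"}))   # True  -> stall and forward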
|
For example, if I have some MIPS pseudocode:

LW R2, array(R1) //R2 = array[1] = 4
SUB R2, R3, R1 //R2 = R3 - R1

where R1 = 1, R2 = 2, R3 = 3, array[1] = 4.

I understand that a stall would occur, and forwarding from MEM to EX of the SUB instruction would be necessary, if R2 were used in the calculation in the SUB instruction. However, in this case where the R2 register is overwritten anyway and is not used in the calculation, would it be necessary to stall the pipeline?
Then is it necessary to even have this load instruction at the start? I have tried searching for examples online but I couldn't find one with this order of instructions. From my understanding, there shouldn't be a stall. However, I am a bit confused and need some pointers.
|
In a MIPS pipeline, will there be a stall if the next instruction overwrites the destination register of the previous instruction?
|
The following are the three processes you want to create:

stdin -> ls -> pipe1
pipe1 -> head -> pipe2
pipe2 -> tail -> stdout

You have at most one pipe created at a time, but head needs to communicate with two.

Base your loop around the creation of processes, not around the creation of pipes. That means you will need to carry pipe information from one pass to the next.

pid_t pids[ num_cmds ];
int next_stdin_fd = 0;
for (int i = 0; i < num_cmds; ++i ) {
int stdin_fd = next_stdin_fd;
int stdout_fd;
    if ( i == num_cmds - 1 ) {
next_stdin_fd = -1;
stdout_fd = 1;
} else {
int pipe_fds[2];
pipe( pipe_fds );
next_stdin_fd = pipe_fds[ 0 ];
stdout_fd = pipe_fds[ 1 ];
}
pids[ i ] = fork();
    if ( pids[ i ] == 0 ) {
if ( stdin_fd != 0 ) {
dup2( stdin_fd, 0 );
close( stdin_fd );
}
if ( stdout_fd != 1 ) {
dup2( stdout_fd, 1 );
close( stdout_fd );
}
if ( next_stdin_fd != -1 ) {
close( next_stdin_fd );
}
        execvp( cmds[i][0], cmds[i] );
}
if ( stdout_fd != 1 ) {
close( stdout_fd );
}
}
for (int i = 0; i < num_cmds; ++i ) {
int status;
waitpid( pids[ i ], &status, 0 );
}

Error checking omitted for brevity.
|
So I have an assignment where I have to process n commands that are in a pipeline. These commands are Linux-based. If my understanding is correct, I have to create a for loop that repeatedly forks() children of the main process and executes them, connected by pipes. Here's my code so far.

void main(void)
{
char **cmds[3];
char *c1[] = { "ls", "-l", "/etc", 0 };
char *c2[] = { "head", "-n", "10", 0 };
char *c3[] = { "tail", "-n", "5", 0 };
cmds[0] = (char **)c1;
cmds[1] = (char **)c2;
cmds[2] = (char **)c3;
int pid, status;
pid = fork();
if(pid == 0){//child proccess
int fd[2];
int infd;
int i;
for(i = 0; i < 2; i++){
pipe(fd);
int ppid;
ppid = fork();
if(ppid > 0){
dup2(fd[1], 1);
close(fd[0]);
//executes the nth command
execvp(*(cmds+i)[0], *(cmds+i));
}else if(ppid == 0){
dup2(fd[0], 0);
close(fd[1]);
//executes the n+1th command
execvp(*(cmds+i+1)[0], *(cmds+i+1));
}
}
}else if (pid > 0){//parents proccess
while((pid = wait(&status)) != -1);
}
}

As the program stands right now, I'm only able to pipe the first and second commands, but for some reason the 3rd command goes completely undetected. How would I fix this?
|
How do I process n commands in a pipeline in C?
|
You can use the airflow dags trigger command and pass data to the conf parameter. This isn't from a Python shell necessarily, but it can be run from the terminal.

If you must be in a Python shell, you can also use the Python SDK or the Airflow REST API.
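For illustration, a hedged sketch of both routes (the DAG id, host and credentials are placeholders; the CLI flag and REST endpoint shown are the ones documented for Airflow 2.x):

# CLI: airflow dags trigger my_dag --conf '{"days_back": 60}'

# Stable REST API (Airflow 2.x):
import requests

AIRFLOW_URL = "http://localhost:8080"   # placeholder
AUTH = ("admin", "admin")               # placeholder credentials

resp = requests.post(
    f"{AIRFLOW_URL}/api/v1/dags/my_dag/dagRuns",   # 'my_dag' is a placeholder DAG id
    json={"conf": {"days_back": 60}},              # read inside the DAG via dag_run.conf
    auth=AUTH,
)
resp.raise_for_status()
print(resp.json()["dag_run_id"])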
|
May I trigger a DAG from a Python terminal with a specific parameter? I have a DAG that runs multiple times a day with some defined parameters.

Example: there is a function that manipulates data and publishes a table. It should go back 7 days, read data from the resources and restate the table day by day. Sometimes I need to restate the data for, say, 2 months or even a year. I wish I could do it without changing the DAG.

Is there any way to run a DAG from the terminal, for example like airflow trigger_dag <DAG_NAME> <arg/parameter>? I wanted to create the exact same pipeline, parameterize that day parameter, set the schedule to None and run the Python file in the terminal. Is it possible?
|
Trigger an Airflow DAG in python terminal
|
The ValueError message is factually correct: the Pipeline class does not contain any business logic dealing with sample weights. However, your pipeline has two steps, and one of the step components, the XGBoost classifier, supports sample weights.

So, the solution is to address the sample weights parameter directly to the classifier step. According to scikit-learn conventions, you can do so by prepending the classifier__ prefix (reads "classifier" plus two underscore characters) to your fit param name. In short:

pipeline = Pipeline( steps )
pipeline.fit(X, y, classifier__sample_weight=weights)
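A minimal runnable sketch of the routed fit parameter (the data here is synthetic and only meant to show the call; sample_weight is expected to be one weight per sample, array-like, rather than a dictionary):

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from xgboost import XGBClassifier

X = np.random.rand(100, 4)
y = np.random.randint(0, 2, size=100)
weights = np.random.rand(100)   # one weight per sample

pipeline = Pipeline([('scaler', MinMaxScaler()), ('classifier', XGBClassifier())])

# 'classifier' matches the step name, so the kwarg is routed to XGBClassifier.fit()
pipeline.fit(X, y, classifier__sample_weight=weights)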
|
I want to use the sample_weight parameter with XGBClassifier from the xgboost package. The problem happens when I want to use it inside a Pipeline from sklearn.pipeline.

from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import Pipeline
from xgboost import XGBClassifier
clf = XGBClassifier(**params)
steps = [ ('scaler', MinMaxScaler() ), ('classifier', clf ) ]
pipeline = Pipeline( steps )

When I run pipeline.fit(x, y, sample_weight=sample_weight), where sample_weight is just a dictionary with ints representing weights, I get the following error:

ValueError: Pipeline.fit does not accept the sample_weight parameter.

How can I solve this problem? Is there a workaround? I have seen that an issue already exists.
|
Using sample_weight param with XGBoost through a pipeline
|
extension.yaml:

input:
label: ""
file:
paths: [./input/*]
codec: lines
max_buffer: 1000000
delete_on_finish: false
pipeline:
processors:
- bloblang: |
root.name = this.name.uppercase()
root.price = this.price
output:
broker:
pattern: fan_out
outputs:
- file:
path: "result/file1.log"
codec: lines
- file:
path: "result/file2.log"
codec: lines
processors:
- bloblang: |
          root.name = this.name.lowercase()

It will generate two output files, "file1.log" and "file2.log".

file1.log:

{"name":"COFFEE","price":"$1.00"}
{"name":"TEA","price":"$2.00"}
{"name":"COKE","price":"$3.00"}
{"name":"WATER","price":"$4.00"}file2.log{"name":"coffee"}
{"name":"tea"}
{"name":"coke"}
{"name":"water"}Here we can use something called 'Broker' in Benthos as shown in the above example.It allows you to route messages to multiple child outputs using a range of brokering patterns.You can refer to this link for more information:Broker
|
Input Data:{ "name": "Coffee", "price": "$1.00" }
{ "name": "Tea", "price": "$2.00" }
{ "name": "Coke", "price": "$3.00" }
{ "name": "Water", "price": "$4.00" }extension.yamlinput:
label: ""
file:
paths: [./input/*]
codec: lines
max_buffer: 1000000
delete_on_finish: false
pipeline:
processors:
- bloblang: |
root.name = this.name.uppercase()
root.price = this.price
output:
file:
path: "result/file1.log"
    codec: lines

I want to create output files based on the input and the defined processors for our needs.
|
How can we generate multiple output files in Benthos?
|
The status of a job is determined solely by its script:/before_script: sections (the two are simply concatenated together to form the job script). after_script: is a completely different construct -- it is not part of the job script. It is mainly for taking actions after a job is completed. after_script: runs even when jobs fail beforehand, for example.

Per the docs (emphasis added on the last bullet): Scripts you specify in after_script execute in a new shell, separate from any before_script or script commands. As a result, they:

- Have the current working directory set back to the default (according to the variables which define how the runner processes Git requests).
- Don't have access to changes done by commands defined in the before_script or script, including command aliases and variables exported in script scripts, and changes outside of the working tree (depending on the runner executor), like software installed by a before_script or script script.
- Have a separate timeout, which is hard-coded to 5 minutes.
- Don't affect the job's exit code. If the script section succeeds and the after_script times out or fails, the job exits with code 0 (Job Succeeded).
|
Consider this .gitlab-ci.yml:

variables:
var1: "bob"
var2: "bib"
job1:
script:
- "[[ ${var1} == ${var2} ]]"
job2:
script:
- echo "hello"
after_script:
- "[[ ${var1} == ${var2} ]]"In this example, job1 fails as expected but job2 succeeds, incomprehensibly. Can I force a job to fail in theafter_scriptsection?Note:exit 1has the same effect as"[[ ${var1} == ${var2} ]]".
|
Gitlab: Fail job in "after_script"?
|
"tagged from main branch"

Unfortunately, this is not possible. Git tags are only associated with commits, not branches. Therefore, you cannot create a condition for a tag to be created "from" a branch because that's not how tags work. Also consider that a tagged ref can exist on many branches, or even no branch at all.

This is also the reason why the predefined variables CI_COMMIT_TAG and CI_COMMIT_BRANCH will never be present together. If a pipeline is associated with a tag, it cannot be associated with a branch and vice versa.

The best you might be able to do is to run only on tags, then check in the job itself whether the tagged ref exists in main. Unfortunately this is not possible to do with rules:.
|
I just want to run pipelines when tagged from the main branch. I tried using workflow but it doesn't work. This is my .gitlab-ci.yml file:

workflow:
rules:
- if: '$CI_COMMIT_BRANCH == "develop"'
variables:
CHART_GIT_URL: $CHART_DEV_URL
CHART_VALUES_FILE: "values-dev.yaml"
DOCKER_IMAGE_TAG: "dev-$CI_COMMIT_SHORT_SHA"
- if: $CI_COMMIT_TAG && $CI_COMMIT_BRANCH == "main"
variables:
CHART_GIT_URL: $CHART_PROD_URL
CHART_VALUES_FILE: "values-prod.yaml"
DOCKER_IMAGE_TAG: "v$CI_COMMIT_TAG"
stages:
- build and push
- deploy
package Docker image:
stage: build and push
before_script:
- docker login $DOCKER_REGISTRY -u $DOCKER_REGISTRY_USER -p $DOCKER_REGISTRY_PASSWD
script:
- docker build -t $DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG .
- docker push $DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG
rules:
- if: '$CI_COMMIT_BRANCH == "develop"'
- if: $CI_COMMIT_TAG && $CI_COMMIT_BRANCH == "main"Thanks for the help!
|
How to run GitLab CI pipelines only for a branch and tag?
|
Merge request approval settings, including "Remove all approvals when commits are added to the source branch", are configurable per project. These settings also require a Premium subscription. So, you may experience differences based on how the project is configured or the subscription level of the customer/instance.
|
In GitLab, is there a setting that affects whether an approval for an MR is invalidated after a new commit is pushed? I have seen that in some setups a new commit invalidates the approval but in others it does not, so I am wondering what the difference is.
|
Merge request approval invalidates after new push
|
In order to stop pipeline execution when a new branch is created and at the same time run when a new commit happens on a branch, try changing from:

workflow:
rules:
- if: $CI_MERGE_REQUEST_IID
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

to:

workflow:
rules:
- if: $CI_COMMIT_BEFORE_SHA == "0000000000000000000000000000000000000000"
when: never
- if: $CI_MERGE_REQUEST_IID
- if: $CI_PIPELINE_SOURCE == "push"The rule- if: $CI_COMMIT_BEFORE_SHA == "0000000000000000000000000000000000000000"
      when: never

will stop the execution of a new pipeline when a new branch is created.

The rule

    - if: $CI_PIPELINE_SOURCE == "push"

is added to allow a new pipeline to trigger when a commit happens on a branch,
because if the event is not a merge request the pipeline won't execute.
|
Currently GitLab runs a pipeline when creating a merge request + branch with the GUI. Is it possible to skip this pipeline, since it only repeats the last pipeline from the default branch? We tried with:

workflow:
rules:
- if: $CI_MERGE_REQUEST_IID
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

This works to skip new-branch pipelines, but it doesn't run a pipeline for new commits on branches which have no merge request.
|
Run gitlab pipeline only for commits
|
I don't believe you can do that without executing the job at all, since Jenkins needs to retrieve the latest Jenkinsfile in order to know about any updates to the parameters. This solution could be helpful: basically, you set a Boolean parameter for the job, and if it is true the job will execute but all stages will be skipped. Thus the parameters of the job get updated and the job completes quickly.
|
I'm using this Jenkins pipeline to create parameters from a list of directories in a GitLab repository. New folders are constantly being uploaded to this repository, but I'm forced to execute this pipeline each time a commit is made (I want to execute the Jenkins pipeline manually and select a specific directory). How can I update the parameter list without running the pipeline job?

node {
checkout scm
def backups = sh(returnStdout: true, script: "ls $WORKSPACE")
backups.split().each {
backupArray << it
}
}
pipeline {
    parameters { choice(name: 'BACKUP', choices: backupArray, description: 'Select the backup to perform the rollback') }
agent any
stages {
        stage('Performing the rollback with the selected backup') {
steps {
script {
sh "echo Se realiza el Rollback con el Backup $params.BACKUP"
withCredentials([string(credentialsId: 'ID', variable: 'VARIABLE')]) {
sh "script.sh $params.BACKUP"
}
}
}
}
}
post {
always {
script {
sh 'rm -rf *'
}
}
}
}
|
Update parameters dynamically in a Jenkins pipeline
|
TL;DR: object magic.

When Get-ChildItem is run, it returns a collection of objects. This is passed to Select-String. The cmdlet's source is available on GitHub. There's a ProcessRecord() method that, well, processes input objects. It contains a few checks for object type and whether those are directories. Like so:

if (_inputObject.BaseObject is FileInfo fileInfo)
...
if (expandedPathsMaybeDirectory && Directory.Exists(filename))
...

Thus, it doesn't matter that Get-ChildItem's collection contains both DirectoryInfo and FileInfo objects; the cmdlet is smart enough to figure out what it's trying to consume.
|
Trying to use select-string on a directory results in an error:

PS C:\> select-string pattern P:\ath\to\directory
select-string : The file P:\ath\to\directory cannot be read: Access to the path 'P:\ath\to\directory' is denied.

However, when I use get-childItem -recurse, the command finishes without problem:

PS C:\> get-childItem -recurse P:\ath\to | select-string pattern

This surprises me because get-childItem -recurse also passes directories down the pipeline. So I'd have assumed that select-string raises the same error when it processes the first directory. Since this is not the case, I wonder where or how the directories are filtered out in the pipeline.
|
Why does "get-childItem -recurse | select-string foo" not result in an error if there are subdirectories?
|
Laravel has commands and a scheduler; combining these two gives exactly what you want. Create your command in the Console\Commands folder, with your desired logic. Your question is sparse, so most of this is pseudo-logic and you can adjust it for your case.

namespace App\Console\Commands;
class StatusUpdater extends Command
{
protected $signature = 'update:status';
protected $description = 'Update status on your model';
public function handle()
{
$models = YourModel::whereDate('date', now())->get();
$models->each(function (YourModel $model) {
if ($model->status === 'wrong') {
$model->status = 'new';
$model->save();
}
});
}
}

For this command to run daily, you can use the scheduler to schedule the given command. Go to Console\Kernel.php, where you will find a schedule() method.

use App\Console\Commands\StatusUpdater;
use Illuminate\Console\Scheduling\Schedule;
class Kernel extends ConsoleKernel
{
protected function schedule(Schedule $schedule)
{
$schedule->command(StatusUpdater::class)->daily();
}
}
For scheduling to work, you have to add the following cron entry on your server, which is described in the Laravel documentation:

* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1
|
This is somewhat of a design question for my current Laravel side project. Currently I have a table that stores a status value in one column and the date when that status should be altered. Now I want to alter that status value automatically when the stored date is the current date. Since that table will gain more rows over time, I have to perform that altering process in a repeating manner. On top of that, I want to perform some constraint checks on the data as well. I'm sure Laravel is capable of doing that, but how?
|
Laravel perform scheduled job on database
|
If you have Git for Windows, you have 200+ Linux commands accessible in <Path\to\Git>\usr\bin. That includes the command rm.exe. Your Clean-reports step can then become:

rm -Rf cypress\reports && ...

That command would be interpreted in a Linux environment.
|
I am working on a Cypress project. The package.json has commands to delete the old report folder and create a new folder with the same name. I achieve this by using Windows commands in package.json:

"Clean-reports": "rmdir /s /q cypress\reports && mkdir cypress\reports"
"Test": "npm run Clean-reports && cypress run"

But when running this project in a GitLab pipeline it gets stuck on the rmdir command. How do we achieve directory deletion and creation when running tests over a GitLab pipeline?
|
Gitlab pipeline fails since the package.json has windows commands
|
If you have 3 tasks which can be merged into one, and what you want to achieve is only to have 3 separate functions running in the same container to make the .gitlab-ci.yml file easier to understand, I would recommend using YAML anchors (see below).

.provision_template: &provision_definition
- XXX
.cpp_tests_template: &cpp_tests_definition
- YYY
.python_tests_template: &python_tests_definition
- ZZZ
my_job:
script:
- *provision_definition
- *cpp_tests_definition
- *python_tests_definition
|
I've got 3 stages:

- provision
- cpp tests
- python tests

I need to run provision before running tests. GitLab suggests using artifacts to pass results between stages, but I'm afraid that's not possible in my scenario since Ansible does lots of different stuff (not just generate a few config files/binaries). Ideally, I'd like to be able to run all three stages in the same container, because in my scenario the stages are logical and could essentially be merged into one. I'd want to avoid merging them, as this would make .gitlab-ci.yml harder to understand.
|
How to run multiple stages in the same container in gitlab?
|
Assuming you are using the ':latest' tag in docker-compose, the latest version of the image will always be pulled when you run this:

docker-compose down && docker-compose build --pull && docker-compose up

(Be warned that the upgrade may cause a very slight downtime while the container images are being pulled.)

This can be combined with the webhook support of Docker Hub in order to run this command when a new image is pushed. See https://docs.docker.com/docker-hub/webhooks/

You would need some endpoint for receiving the POST call from the webhook and executing the command, for example this: https://github.com/adnanh/webhook. It can be configured as an HTTP endpoint that receives the webhook from Docker Hub when the new image is pushed and runs the command above. For security reasons I would advise using an HTTPS endpoint, and an IP whitelist for the incoming webhook that only allows traffic from Amazon ELB IPs (as that's what Docker Hub uses).
Additionally, you may want to verify that the callback URL is from https://registry.hub.docker.com/. Unfortunately Docker Hub does not yet support the use of a secret to validate the caller: https://github.com/docker/roadmap/issues/51
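For illustration only, a bare-bones sketch of such a webhook receiver using the Python standard library (the port, path and compose command are assumptions; the adnanh/webhook tool linked above is a more complete alternative):

import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class HookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and discard the payload; in production, validate the caller first.
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)
        if self.path == "/redeploy":   # assumed path configured in the Docker Hub webhook
            subprocess.run(
                "docker-compose down && docker-compose build --pull && docker-compose up -d",
                shell=True, check=False,   # -d added so the handler does not block
            )
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 9000), HookHandler).serve_forever()   # assumed port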
|
I want to use a Buddy pipeline to push new images to Docker Hub. When new images are pushed, Google Container-Optimized OS should pull the new ones. I'm using Google Compute Engine to host docker-compose on Container-Optimized OS.
How can I do this?
|
How to update the containers in Googles container optimized os?
|
You need to put the transformer first, then the columns as subsequent arguments in each tuple. If you check out the help page, it reads:

sklearn.compose.make_column_transformer(*transformers, **kwargs)

Something like below will work:

from sklearn.preprocessing import StandardScaler, OneHotEncoder, MinMaxScaler
from sklearn.compose import make_column_transformer
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier
import numpy as np
import pandas as pd
X = pd.DataFrame({'x1':np.random.uniform(0,1,5),
'x2':np.random.choice(['A','B'],5)})
y = pd.Series(np.random.choice(['0','1'],5))
numeric_cols = X.select_dtypes('number').columns.to_list()
categorical_cols = X.select_dtypes('object').columns.to_list()
preprocess = make_column_transformer(
(MinMaxScaler(),numeric_cols),
(OneHotEncoder(),categorical_cols)
)
model = make_pipeline(preprocess,XGBClassifier())
model.fit(X,y)
Pipeline(steps=[('columntransformer',
ColumnTransformer(transformers=[('minmaxscaler',
MinMaxScaler(), ['x1']),
('onehotencoder',
OneHotEncoder(), ['x2'])])),
('xgbclassifier', XGBClassifier())])
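The question also asks how to deal with unseen categories in the country column. One common option (an addition on top of the answer above, using scikit-learn's documented handle_unknown parameter) is:

preprocess = make_column_transformer(
    (MinMaxScaler(), numeric_cols),
    (OneHotEncoder(handle_unknown='ignore'), categorical_cols)
)

With handle_unknown='ignore', a category that appears only at prediction time is encoded as all zeros instead of raising an error.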
|
I have a problem where I am trying to apply transformations to my categorical feature 'country' and the rest of my numerical columns. How can I do this? I am trying the below:

preprocess = make_column_transformer(
(numeric_cols, make_pipeline(MinMaxScaler())),
(categorical_cols, OneHotEncoder()))
model = make_pipeline(preprocess,XGBClassifier())
model.fit(X_train, y_train)

Note that numeric_cols is passed as a list and so is categorical_cols. However, I get this error: "TypeError: All estimators should implement fit and transform, or can be 'drop' or 'passthrough' specifiers", along with a note that the list of all my numerical columns (type <class 'list'>) doesn't. What am I doing wrong? Also, how can I deal with unseen categories in the column country?
|
Creating a pipeline for one-hot encoded variables not working
|
As szymon-bednorz commented, generally we don't fit_transform on test data; rather we go for fit_transform(X_train) and transform(X_test). This works pretty well when your training and test data are from the same distribution and the size of X_train is greater than that of X_test.

Further, as you found while debugging, the fact that fitting through the pipeline gives the same accuracy as fitting logistic regression directly hints that X_train and X_test are already scaled, although I am not sure about this.
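A sketch of the non-pipeline code with the scaler fitted only on the training data, which is what the pipeline does internally (variable names follow the question; X_train, X_test, y_train and y_test are assumed to already exist):

from sklearn import linear_model
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train_std = scaler.fit_transform(X_train)   # fit the scaler on the training data only
X_test_std = scaler.transform(X_test)         # reuse the training mean/std, do not refit

clf_LR = linear_model.LogisticRegression()
clf_LR.fit(X_train_std, y_train)
print('Testing score:', clf_LR.score(X_test_std, y_test))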
|
I am trying to construct a pipeline with a StandardScaler() and LogisticRegression(). I get different results when I code it with and without the pipeline. Here's my code without the pipeline:

clf_LR = linear_model.LogisticRegression()
scalar = StandardScaler()
X_train_std = scalar.fit_transform(X_train)
X_test_std = scalar.fit_transform(X_test)
clf_LR.fit(X_train_std, y_train)
print('Testing score without pipeline: ', clf_LR.score(X_test_std, y_test))

My code with the pipeline:

pipe_LR = Pipeline([('scaler', StandardScaler()),
('classifier', linear_model.LogisticRegression())
])
pipe_LR.fit(X_train, y_train)
print('Testing score with pipeline: ', pipe_LR.score(X_test, y_test))

Here is my result:

Testing score with pipeline: 0.821917808219178
Testing score without pipeline: 0.8767123287671232

While trying to debug the problem, it seems the data is being standardized. But the result with the pipeline matches the result of training the model on my original X_train data (without applying StandardScaler()).

clf_LR_orig = linear_model.LogisticRegression()
clf_LR_orig.fit(X_train, y_train)
print('Testing score without Standardization: ', clf_LR_orig.score(X_test, y_test))
Testing score without Standardization: 0.821917808219178Is there something I am missing in the construction of the pipeline?
Thanks very much!
|
Sklearn.pipeline producing incorrect result
|
I believe that if you give permissions to your build service user it'll work.
|
There is a big project that clones multiple projects and builds them, so I want to tag these projects with a version tag. But it prompts me that I need the 'GenericContribute' permission; logs below:

git -c http.extraheader="AUTHORIZATION: bearer ***" tag 1.15.11
git -c http.extraheader="AUTHORIZATION: bearer ***" push origin 1.15.11
remote: 0000000000aaTF401027: You need the Git 'GenericContribute' permission to perform this action. Details: identity 'Build\0dea3e47-c818-4a83-ae9d-0422c80128c5', scope 'repository'.
2020-07-16T11:11:55.7839182Z [11:11:55.780 INF] remote: TF401027: You need the Git 'GenericContribute' permission to perform this action. Details: identity 'Build\0dea3e47-c818-4a83-ae9d-0422c80128c5', scope 'repository'.
|
How to tag repository in Azure pipeline? You need the Git 'GenericContribute' permission to perform this action
|
You can add it as AppSettings to the App Service by setting up the AzureRmWebAppDeployment task in the pipeline. Here is a template YAML sample:

#11. Deploy web app
- task: AzureRMWebAppDeployment@4
displayName: Deploy web app on Linux container
inputs:
appType: webAppContainer
DockerImageTag: ${{ parameters.image_tag }}
DockerNamespace: ${{ parameters.registry_url }}
DockerRepository: ${{ parameters.repository_name }}
...
    AppSettings: ${{ parameters.webapp_settings }}

Example of webapp_settings values:

webapp_settings: '-key1 "text" -key2 12 -otherKey 34'

App Settings are injected into your app as environment variables at runtime (see here).
|
I have a docker image that requires an ENV variable to start in the correct mode (e.g. $ docker run -e "env_var_name=another_value" ...). The Dockerfile starts with:

FROM nginx:alpine
ENV env_var_name standalone

In the Azure DevOps release pipeline I use AzureRmWebAppDeployment@3 to create the docker container, but I do not understand where I can do the ENV settings. I cannot use the .env file.
many thanks
|
Azure App Service Deploy, how to set docker ENV variables
|
Very good question! We already have something to solve this. Suppose you code a class like this:

class MyWrapper(BaseStep):
def transform(self, data_inputs):
return sigmoid(data_inputs)
def predict_proba(self, data_inputs):
        return data_inputs

You could do as follows:

step = MyWrapper()

Then, once you're ready to replace the method, use Neuraxle's mutate function:

step = step.mutate(new_method='predict_proba', method_to_assign_to='transform')

And then, whenever .transform() is called, the predict_proba method will be called instead. The mutate will work even if your step is wrapped (nested) deeper within other steps.

Note that we should probably modify the sklearn wrapper to allow this. I've added the issue here: https://github.com/Neuraxio/Neuraxle/issues/368

So until this issue is fixed, you could do class MySKLearnWrapper(SKLearnWrapper): ... (inheriting from SKLearnWrapper to modify it) and define predict_proba yourself, like it was suggested here: https://github.com/Neuraxio/Neuraxle/pull/363/files
|
I'm trying to set up a Neuraxle Pipeline that uses sklearn's OneVsRestClassifier (OVR). Every valid step in a Neuraxle pipeline has to implement the fit() and transform() methods. In order to use sklearn pipeline steps, Neuraxle uses a SKLearnWrapper that maps the OVR's predict() method to the transform() method of the SKLearnWrapper. Is there a way to modify this behavior so that the predict_proba() method is mapped to the OVR's transform() method instead? Or is there another way of retrieving the calculated probabilities?
|
Using predict_proba() instead predict() in Neuraxle Pipeline with OneVsRestClassifier
|
I actually misspelled the name of the template found in my GitHub repository. Everything worked well after I corrected it.
But I think this error was not explicit at all. I was expecting something like 'template not found in the path specified' instead of 'Invalid template path template.yml'.
|
I am working on a CI/CD project (using a CircleCI pipeline) and currently I am stuck on getting my "create_infrastructure" job to work. Below is the job:
# AWS infrastructure
create_infrastructure:
docker:
- image: amazon/aws-cli
steps:
- checkout
- run:
name: Ensure backend infrastructure exist
command: |
aws cloudformation deploy \
--template-file template.yml \
--stack-name my-stack
When I run the job above, it returns:
Invalid template path template.yml
Where am I supposed to keep the template.yml file? I placed it in the same location as the config.yml in the project's GitHub repository (is this right?). Could the problem be on the line --template-file template.yml in my job? (I am a beginner here.)
Please I need help.
|
circleci config.yml: 'Invalid template path template.yml'
|
You are almost there. Similar to how you created multiple dictionaries for the SVC model, create a list of dictionaries for the pipeline. Try this example:
from sklearn.datasets import fetch_20newsgroups
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC
categories = [
'alt.atheism',
'talk.religion.misc',
'comp.graphics',
'sci.space',
]
remove = ('headers', 'footers', 'quotes')
data_train = fetch_20newsgroups(subset='train', categories=categories,
shuffle=True, random_state=42,
remove=remove)
pipe = Pipeline([
('bag_of_words', CountVectorizer()),
('estimator', SVC())])
pipe_parameters = [
{'bag_of_words__max_features': (None, 1500),
'estimator__C': [ 0.1, ],
'estimator__gamma': [0.0001, 1],
'estimator__kernel': ['rbf']},
{'bag_of_words__max_features': (None, 1500),
'estimator__C': [0.1, 1],
'estimator__kernel': ['linear']}
]
from sklearn.model_selection import GridSearchCV
grid = GridSearchCV(pipe, pipe_parameters, cv=2)
grid.fit(data_train.data, data_train.target)
grid.best_params_
# {'bag_of_words__max_features': None,
# 'estimator__C': 0.1,
# 'estimator__kernel': 'linear'}
|
Suppose I have this Pipeline object:
from sklearn.pipeline import Pipeline
pipe = Pipeline([
('my_transform', my_transform()),
('estimator', SVC())
])
To pass the hyperparameters to my Support Vector Classifier (SVC) I could do something like this:
pipe_parameters = {
'estimator__gamma': (0.1, 1),
'estimator__kernel': (rbf)
}
Then, I could use GridSearchCV:
from sklearn.model_selection import GridSearchCV
grid = GridSearchCV(pipe, pipe_parameters)
grid.fit(X_train, y_train)
We know that a linear kernel does not use gamma as a hyperparameter. So, how could I include the linear kernel in this GridSearch? For example, in a simple GridSearch (without a Pipeline) I could do:
param_grid = [
{'C': [ 0.1, 1, 10, 100, 1000],
'gamma': [0.0001, 0.001, 0.01, 0.1, 1],
'kernel': ['rbf']},
{'C': [0.1, 1, 10, 100, 1000],
'kernel': ['linear']},
{'C': [0.1, 1, 10, 100, 1000],
'gamma': [0.0001, 0.001, 0.01, 0.1, 1],
'degree': [2, 3],
'kernel': ['poly']}
]
grid = GridSearchCV(SVC(), param_grid)
Therefore, I need a working version of this sort of code:
pipe_parameters = {
'bag_of_words__max_features': (None, 1500),
'estimator__kernel': (rbf),
'estimator__gamma': (0.1, 1),
'estimator__kernel': (linear),
'estimator__C': (0.1, 1),
}
Meaning that I want to use as hyperparameters the following combinations:
kernel = rbf, gamma = 0.1
kernel = rbf, gamma = 1
kernel = linear, C = 0.1
kernel = linear, C = 1
|
Using Pipeline with GridSearchCV
|
When you look into the documentation of TransformedTargetRegressor it says that the attribute .regressor_ (note the trailing underscore) returns the fitted regressor. Hence, your call should look like:
model.regressor_.named_steps['estimator'].feature_importances_
Your previous calls were just returning an unfitted clone. That's where the error came from.
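For reference, a minimal end-to-end sketch (synthetic data, otherwise the same pipeline as in the question) showing that the fitted estimator is reachable through .regressor_:

from sklearn.compose import TransformedTargetRegressor
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X, y = make_regression(n_samples=100, n_features=5, random_state=0)

pipeline = Pipeline(steps=[('scale', StandardScaler()),
                           ('estimator', RandomForestRegressor(n_estimators=20, random_state=0))])
model = TransformedTargetRegressor(regressor=pipeline, transformer=MinMaxScaler())
model.fit(X, y)

# .regressor_ is the fitted clone of the pipeline; .regressor is only the unfitted template.
print(model.regressor_.named_steps['estimator'].feature_importances_)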
|
I set up a small pipeline with scikit-learn that I wrapped in a TransformedTargetRegressor object. After training, I would like to access an attribute of my trained estimator (e.g. feature_importances_). Can anyone tell me how this can be done?
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import MinMaxScaler
from sklearn.compose import TransformedTargetRegressor
# setup the pipeline
pipeline = Pipeline(steps = [('scale', StandardScaler(with_mean=True, with_std=True)),
('estimator', RandomForestRegressor())])
# transform target variable
model = TransformedTargetRegressor(regressor=pipeline,
transformer=MinMaxScaler())
# fit model
model.fit(X_train, y_train)
I tried the following:
# try to access the attribute of the fitted estimator
model.get_params()['regressor__estimator'].feature_importances_
model.regressor.named_steps['estimator'].feature_importances_
But this results in the following NotFittedError:
NotFittedError: This RandomForestRegressor instance is not fitted yet.
Call 'fit' with appropriate arguments before using this method.
|
How to access attribute from a trained estimator in TransformedTargetRegressor pipeline in scikit-learn?
|
No, it doesn't. If the pipeline scaled the labels too, you would get scaled predictions as well.
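If you do want y to be scaled during fitting, one option (a minimal sketch with synthetic data, not something the Pipeline in the question does on its own) is to wrap the pipeline in a TransformedTargetRegressor, which scales y for training and inverse-transforms predictions back to the original units:

from sklearn.compose import TransformedTargetRegressor
from sklearn.datasets import make_regression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=100, n_features=5, random_state=0)

pipe = Pipeline([('sc', StandardScaler()),
                 ('pca', PCA(n_components=2)),
                 ('lr', LinearRegression())])

# Only X flows through the pipeline's scaler; y is handled by the outer wrapper.
model = TransformedTargetRegressor(regressor=pipe, transformer=StandardScaler())
model.fit(X, y)
print(model.predict(X)[:3])  # predictions come back in the original y units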
|
My confusion is about the pipeline. Suppose my code is:
pipe = Pipeline([('sc', StandardScaler()),
('pca',PCA(n_components=2)),
('lr', LinearRegression())])
and I called pipe.fit(X_train, y_train). Does this also scale the y_train values?
|
Does my sklearn pipeline also scale my dependent variables y?
|
You can use $objectToArray and $filter to achieve this:
db.collection.aggregate([
{
$addFields: {
rootAsArray: {
$filter: {
input: {$objectToArray: "$$ROOT"},
as: "field",
cond: {
$and: [
{$ne: ["$$field.k", "_id"]},
{$ne: ["$$field.k", "item"]},
...any other field that's not relevant, you can also just add these to input arr ...
{$not: {$setIsSubset: [["$$field.k"], ["field_1", "field_5"]]}}
]
}
}
}
}
},
{
$project: {
item: "$item",
OTHER: {
$sum: {
$map: {
input: "$rootAsArray",
as: "value",
in: "$$value.v"
}
}
}
}
}
]);
|
I've got this problem and I can't seem to solve it. A sample of my data:
[
{'item': 'U',
'field_1': 3,
'field_2': 1,
'field_3': 1,
'field_4': 2,
'field_5': 5,
:
:
:
},
{'item': 'Y',
'field_1': 9,
'field_2': 2,
'field_3': 3,
'field_4': 5,
'field_5': 1,
:
:
:
}
]
I would like to create a new field called REST, which will be the sum of the fields not in my input array ([field_1, field_5]). My desired result is this (for input [field_1, field_5]):
[
{'item': 'U',
'REST': 13,
},
{'item': 'Y',
'REST': 20
}
]
Mongo gurus, please help! Deeply appreciated. Thanks!
|
How to merge columns in Mongo
|
Hypothetically, yes. Forwarding into the MEM stage directly would make it possible to execute dependent LW and SW back-to-back. As long as the loaded word is stored by the SW at least. It wouldn't be possible to have the SW use that loaded word as the base of the address without a pipeline bubble, otherwise it would require forwarding back in time.
But typically you would see a pipeline such as below (source: a model of a 5-stage pipelined MIPS in SIM-PL), with only one forwarder which feeds into EX. With a setup like that, there is no way to forward from LW into SW; the hardware required for it isn't there.
|
I am confused as to how a Store Word Instruction coming after an LW using the same $rt causes a pipeline stall in MIPS.
Consider this block of code:
lw $s0, 0($t0)
sw $s0, 12($t0)
lw $s1, 4($t0)
sw $s1, 16($t0)
lw $s2, 8($t0)
sw $s2, 20($t0)
Here 3 words are being shifted around in memory. E.g. in the first 2 lines, the word at 0($t0) is loaded into $s0, and then its contents are saved back to memory. I'm not sure if the sw instruction requires $s0 in the EX stage or in the MEM stage. If it is needed in the MEM stage, wouldn't it be resolved just by forwarding, without needing to stall the pipeline?
|
MIPS Pipeline Stalls: SW after LW
|
I don't think there's a way to write the unstring function you want, but you can do this:
makeContrastsFromString <- function(s)
    eval(parse(text = paste("makeContrasts(", s, ")")))
Then makeContrastsFromString(aarts_2) should give you what you want. I haven't tested it, since I can't install limma to get makeContrasts. My function is pretty fragile; if a user breaks up the lines into separate elements of a string vector, it won't work. I'll leave it to you to make it robust against that kind of thing.
|
I am using the makeContrasts function as part of a pipeline (with limma).
I have several studies, which are entered into the pipeline one after the other. For two of them, the makeContrasts call looks like this:
aarts_1_cm = makeContrasts(R10d = labelR - labelP,
R1nMRap = labelR1 - labelP,
R10nMRap_OSKM = labelR10 - labelO,
levels = Design)
and
aarts_2_cm = makeContrasts(OSKM14 = labelO14 - labelP14,
OSKM14mTORsh_OSKM14p21sh = labelOT14 - labelOp14,
OSKM20mTORsh_OSKM20p21sh = labelOT20 - labelOp20,
levels = Design)
As the contrasts are different for each study, I cannot incorporate them into the pipeline. I have therefore turned the contents of the function into a string:
aarts_2 = "OSKM14 = labelO14 - labelP14,
OSKM14mTORsh_OSKM14p21sh = labelOT14 - labelOp14,
OSKM20mTORsh_OSKM20p21sh = labelOT20 - labelOp20,
levels = Design"So that I can then domakeContrasts(unstring(aarts_2)), but I don't know how to unstringaarts_2so that the function will read it. Or if there is a better way to do this. I would appreciate any help with this.Thanks.
|
Turn string into the contents of a function in R for pipeline
|
Use the common -OutVariable parameter to capture Get-ChildItem's output in a variable of your choice, separately from what the pipeline overall outputs:
Get-ChildItem c:\ -Include *.exe.config -Recurse -OutVariable files |
Select-String 'string1','string2'
The variable $files now contains all System.IO.FileInfo instances output by Get-ChildItem; note how you must pass the variable name without the $ sigil to -OutVariable.
A small caveat is that -OutVariable, as of PowerShell [Core] 7.0:
- collects multi-object output in a System.Collections.ArrayList instance rather than in a regular PowerShell array of type [object[]] (a fixed-size array of System.Object instances);
- even collects single-object output that way (wrapped in a collection), whereas direct pipeline output just emits the object itself.
See this answer for background information and a workaround.
|
I want to search all *.exe.config files for certain strings. I want to list the files with those strings and then list all of the files that were checked. I can get it to find the files with the strings with:
Get-ChildItem -Path c:\ -I *.exe.config -R | Select-String 'string1','string2'
My issue: I can't figure out how to get it to show all the .exe.config files that it checked (without searching the computer again). I thought I could save the initial search results into a variable, then run through that with a for loop, but I can't even get the right info into a variable. I tried several variations of the below, but $files is always empty. Also, I'm not sold on this approach, so if anyone has a completely different method, that would be fine.
$files = (Get-ChildItem -Path c:\ -I *.exe.config -R).Path
$files = (Get-ChildItem -Path c:\ -I *.exe.config -R | select fullname)
|
List files with strings and then ALL files checked with powershell
|
Here is a way to make this work using a log function that is defined in a shared library in vars\log.groovy:
import java.io.FileWriter
import java.io.BufferedWriter
import java.io.PrintWriter
// The annotated variable will become a private field of the script class.
@groovy.transform.Field
PrintWriter writer = null
void call( String msg ) {
if( ! writer ) {
def fw = new FileWriter(file, true)
def bw = new BufferedWriter(fw)
writer = new PrintWriter(bw)
}
try {
writer.println(msg)
[...]
} catch (e) {
[...]
}
After all, scripts in the vars folder are instantiated as singleton classes, which is perfectly suited for a logger. Usage in a pipeline is as simple as:
log 'some message'
|
I am working on a scripted Jenkins pipeline that needs to write a String with a certain encoding to a file, as in the following example:
class Logger implements Closeable {
private final PrintWriter writer
[...]
Logger() {
FileWriter fw = new FileWriter(file, true)
BufferedWriter bw = new BufferedWriter(fw)
this.writer = new PrintWriter(bw)
}
def log(String msg) {
try {
writer.println(msg)
[...]
} catch (e) {
[...]
}
}
}
The above code doesn't work since PrintWriter is not serializable, so I know I have to prevent some of the code from being CPS-transformed. I don't have an idea on how to do so, though, since as far as I know the @NonCPS annotation can only be applied to methods.
I know that one solution would be to move all output-related code to log(msg) and annotate the method, but that way I would have to create a new writer every time the method gets called. Does someone have an idea on how I could fix my code instead? Thanks in advance!
|
Jenkins scripted Pipeline: How to apply @NonCPS annotation in this specific case
|
SonarQube scanner failing with line out of range
In general, this issue occurs when a file has gone down in its number of lines while Sonar still uses the cached report; that is why it looks for a line that is out of range.
Just like user1014639 said: the problem was due to the old code coverage report that was generated before updating the code. It was fixed after generating coverage reports again. So, please also make sure that any coverage reports that are left behind from the previous run are cleared and new coverage reports are in place.
So, please try to run the command line mvn clean test sonar:sonar to clean out the old report.
Besides, if the above does not help you, you should make sure the analyzed source code is strictly identical to the one used to generate the coverage report. Check this thread for some details.
Hope this helps.
|
We have an Azure DevOps build pipeline with the following steps:
1. Prepare Analysis for SonarQube
2. Run unit tests
3. Run integration tests
4. Run code analysis
For #4, when we try to run Code Analysis, it gives a weird error from the SonarQube scanner:
java.lang.IllegalStateException: Line 92 is out of range in the file
But the file has only 90 lines of code. I am not sure why it is complaining about this?
|
SonarQube scanner failing with line out of range
|
Thanks for your help, I found the solution:
gridCV_MLPR.best_estimator_.named_steps['MLPR'].n_iter_
As gridCV_MLPR.best_estimator_ is a pipeline, we need to select the MLPRegressor parameters with .named_steps['MLPR'].
Thanks a lot for your very, very quick answer ...
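A self-contained toy version of the same pattern (synthetic data and a reduced grid, just to make the attribute access reproducible):

from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=200, n_features=5, random_state=0)

pipe = Pipeline([('scaler', StandardScaler()),
                 ('MLPR', MLPRegressor(solver='lbfgs', max_iter=2000, random_state=0))])
grid = GridSearchCV(pipe, {'MLPR__alpha': [0.001, 10]}, cv=3)
grid.fit(X, y)

best_mlpr = grid.best_estimator_.named_steps['MLPR']  # fitted MLPRegressor of the best model
print(best_mlpr.n_iter_, grid.best_params_)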
|
I made a GridSearchCV with a pipeline and I want to extract one attribute (n_iter_) of a component of the pipeline (MLPRegressor) for the best model. I'm using Python 3.0.
Creation of the pipeline:
pipeline_steps = [('scaler', StandardScaler()), ('MLPR', MLPRegressor(solver='lbfgs', early_stopping=True, validation_fraction=0.1, max_iter=10000))]
MLPR_parameters = {'MLPR__hidden_layer_sizes':[(50,), (100,), (50,50)], 'MLPR__alpha':[0.001, 10, 1000]}
MLPR_pipeline = Pipeline(pipeline_steps)
gridCV_MLPR = GridSearchCV(MLPR_pipeline, MLPR_parameters, cv=kfold)
gridCV_MLPR.fit(X_train, y_train)
When I want to extract the best model with gridCV_MLPR.best_params_, I only get the result for the GridSearchCV:
{'MLPR__alpha': 0.001, 'MLPR__hidden_layer_sizes': (50,)}
But I want to have the number of iterations of the MLPRegressor used by the best model of gridCV_MLPR. How is it possible to use the n_iter_ attribute designed for MLPRegressor() through the pipeline with GridSearchCV?
|
Extract an MLPRegressor attributes (n_iter_ ) for the best model in a pipeline with GridsearchCV?
|
No, currently you can't configure the email templates. There is a popular feature request about it; you can upvote it there.
As a workaround, you can install the Send Email task and add it to the release pipeline; in this task you can customize the email.
|
I have configured Microsoft Azure DevOps to build our software and release it automatically (with the build and the release pipeline). After the successful release I have set it up to send an email to all project members.
My question is: can I somehow configure this email? E.g. I need to remove the "Summary" part. Is this somehow possible with Azure DevOps? Screenshot of current email:
|
Configure Azure DevOps email template [closed]
|
I was just dealing with this today and was able to resolve it by unchecking 'Fail on Standard Error' in the Advanced options of the Bash task. The step will still fail if your script returns a non-zero exit code, but will succeed if you return 0.
So, if I want to return a success I have the script exit 0. If I want to throw an error, I'll do an exit 1. For example, I have a script that does something like this:
if [ "$someResult" == "yay it worked" ]; then
echo "Success!"
exit 0
else
echo "Failsauce!"
exit 1
fi
I was thinking for a long time that I must have that option enabled to be able to return a failure into the pipeline, but apparently that isn't the case.
|
I am running the following bash command:
- bash: $(ci_scripts_path)/01_install_python_tools.sh
displayName: 'Install python 2.7 tools'
failOnStderr: true
The sh script 01_install_python_tools.sh completes successfully, but anyhow I get this error for the step:
##[error]Bash wrote one or more lines to the standard error stream.
|
I am getting "Bash wrote one or more lines to the standard error stream" in Azure pipeline step
|
What you want to use here (I think) is scikit-learn's OneHotEncoder:
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder(categories="auto")
X_train_encoded = encoder.fit_transform(X_train)
X_test_encoded = encoder.transform(X_test)
This keeps the fit_transform syntax and ensures X_test_encoded has the same shape (number of columns) as X_train_encoded. It can also be used in a pipeline, as you mentioned, instead of Dummies(). Example:
pipe1 = make_pipeline(OneHotEncoder(categories="auto"), StandardScaler(), PCA(n_components=7), LogisticRegression())
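A small illustration of the shape guarantee. The column name "c_color" is just an invented example following the question's naming convention, and handle_unknown="ignore" additionally protects against categories that appear only in the test set:

import pandas as pd
from sklearn.preprocessing import OneHotEncoder

X_train = pd.DataFrame({"c_color": ["red", "green", "blue"]})
X_test = pd.DataFrame({"c_color": ["red", "red"]})   # fewer categories than in train

encoder = OneHotEncoder(handle_unknown="ignore")
X_train_encoded = encoder.fit_transform(X_train)
X_test_encoded = encoder.transform(X_test)

print(X_train_encoded.shape, X_test_encoded.shape)   # (3, 3) and (2, 3): same number of dummy columns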
|
I'm trying to create a get_dummies class for my data which I want to use in a Pipeline later:
class Dummies(BaseEstimator, TransformerMixin):
def transform(self, df):
dummies=pd.get_dummies(df[self.cat],drop_first=True) ## getting dummy cols
df=pd.concat([df,dummies],axis=1) ## concatenating our dummies
df.drop(self.cat,axis=1,inplace=True) ## dropping our original cat_cols
def fit(self, df):
self.cat=[]
for i in df.columns.tolist():
if i[0]=='c': ## My data has categorical cols start with 'c'
self.cat.append(i) ## Storing all my categorical_columns for dummies
else:
continue
Now when I call fit_transform on X_train and then transform X_test:
z = Dummies()
X_train=z.fit_transform(X_train)
X_test = z.transform(X_test)
The number of columns in the shapes of X_train and X_test is different:
X_train.shape
X_test.shape
Output:
(10983, 1797)
(3661, 1529)
There are more dummies in X_train than in my X_test.
Clearly, my X_test has fewer categories than X_train. How do I write logic in my Class such that the categories in X_test broadcast to the shape of X_train? I want X_test to have the same number of dummy variables as my X_train.
|
Shape mismatch when One-Hot-Encoding Train and Test data. Train_Data has more Dummy Columns than Test_data while using get_dummies with pipeline
|
If you want one Jenkins to talk to the Kubernetes API with different service accounts, you need to create multiple Jenkins "clouds" in the configuration, each with different credentials. Then in your pipeline you set the "cloud" option to choose the right one.
|
I have a huge pipeline with different developer groups with several permission levels (using the Jenkins Kubernetes Plugin). For example, QA teams and Developer teams have different service accounts in the Kubernetes cluster. So I need to create some connections to Kubernetes clusters, but for every connection I change the cluster context with a namespace name. I want to use multiple namespaces in the Kubernetes context.
This is my own Kubernetes context file:
- context:
cluster: minikube
namespace: user3
    user: minikube
How can I handle this problem with a Kubernetes API call or in YAML files?
This is my example service account YAML file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dev
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: dev
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get"]apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: dev
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: dev
subjects:
- kind: ServiceAccount
name: dev
|
Kubernetes Cluster Context with Multiple Namespaces
|
Simply this, using sed:
find . 2>&1 | sed 's/^[^:]*: .\(.*\).: Permission denied/\1/p;d'
or by using bash only, as your question is tagged bash:
string=$'find: `modules/bx/motif\047: Permission denied'
echo $string
find: `modules/bx/motif': Permission denied
part=${string#*\`}
echo ${part%\'*}
modules/bx/motif
|
How can I get modules/bx/motif only from the following, through a pipeline?
$ find . | grep denied
find: `modules/bx/motif': Permission denied
|
How do I split a string on a delimiter ` in Bash?
|
The Classic RISC pipeline wiki article is very good. Check it out if you haven't.
Why can the 2nd instruction start D in C2? D includes reg-read, but the previous instruction doesn't write back to R3 until C7.
I'm not sure, I haven't spent a lot of time on the classic-RISC pipeline. Based on what we see for this and ADDI, it looks like register-read happens in the E stage. That perfectly explains E stalling until the previous load's write-back. If you're sure reg-read is supposed to happen in the D stage for the pipeline you're studying, then this solution doesn't match your pipeline; it's correct for a different pipeline that doesn't read registers until Execute.
Why does the 3rd inst's D start at C7, and E start at C11?
The D stage of the pipeline is occupied by the previous instruction until C7, at which point it can decode. R1 isn't ready until cycle 11, at which point the data can be forwarded from the memory stage of the previous instruction, so the ADDI's Execute can happen in parallel with Writeback in the previous instruction. This is called a "bypass". A bypass can let ALU operations run with 1 cycle latency, so you could use the output of an ADD in the next instruction without a stall.
Why must the 4th inst start at C7 instead of C4?
Because the previous instruction is stalled in the fetch stage, and it's an in-order pipeline; no out-of-order execution.
|
First of all, sorry for my poor English. The question is a problem in the textbook for my Computer Architecture course. I've found the answer on the net but still cannot figure out the details.
The following is the phasing of instructions in a five-stage (fetch, decode, execute, memory, write) single-pipeline microarchitecture without a forwarding mechanism. All operations are one cycle, except LW and SW are 1 + 2, and Branch is 1 + 1.
Loop: C1 C2 C3 C4 C5 C6 C7 C8 C9 C10 C11 C12 C13 C14 ...
LW R3, 0(R0) F D E M - - W
LW R1, 0(R3) F D - - - E M - - W
ADDI R1, R1, #1 F - - - D - - - E M W
SUB R4, R3, R2 F - - - D E M W
SW R1, 0(R3) F D W M ...
BNZ R4, Loop F D E ...
...
I have several questions:
1. Why can the 2nd instruction start D in C2? As I have learned, the D stage includes "register read", but the previous instruction doesn't write back to R3 until C7.
2. Similar to the previous one, what are the reasons that cause the 3rd inst's D to start at C7, and E to start at C11?
3. Why must the 4th inst start at C7 instead of C4?
This problem originates from the book "Computer Architecture: A Quantitative Approach 5e", example 3.11.
|
Computer Architecture pipeline stalls
|
That's quite simple to achieve with IO.popen:
handler = IO.popen("bash", "w+")
handler.puts("whoami")
puts handler.gets
handler.puts("date")
puts handler.gets
handler.close
Output:
wrodevlopot:tmp lopot$ ruby test.rb
lopot
Sat Oct 1 21:57:42 CEST 2016
IO.popen returns an IO handler. Mind you that we are opening the subprocess with w+, which means read and write. In the example above we are opening a bash process and sending the command whoami, then we read from it and print; same for the command date. Once we're done with the subprocess we call close.
|
I want to write a simple automation tool in Ruby that is supposed to wrap a command-line program.
I am not entirely sure if this question relates to Ruby, or if it relates more to how streams can be connected in Unix-systems in general.
|
How to write a program that wraps both STDOUT and STDIN of another program? [closed]
|
Assuming a fairly standard pipeline, that WAW hazard doesn't exist. It might look sort of hazardy in the program code (in the sense that there are multiple writes to the same register), but there is no mechanism by which the ADD can complete before (or during) the LW (that would mean it calculated the result before the input was available). The SW doesn't write to a register so it doesn't matter, but the ADD can't complete before it either. Actually, WAW hazards don't exist at all in the standard pipeline because instructions simply write back in order.
Your solution for the RAW hazard assumes there is WB->EX forwarding, which, judging by their solution, there isn't. Without forwarders, the soonest you can use a result is when the ID of the reading instruction lines up with the WB of the writing instruction.
Why are (WB) and (EX) not executed in one cycle?
Because it doesn't work. It doesn't work in question a either, so I'm not sure what happened there. The premise of the question is that there is no forwarding to EX, so same as before, the soonest you can use a value after it is produced is when you line the ID of the reading instruction up with the WB of the writing instruction. EX just reads its inputs from the ID/EX pipeline register.
Also, for (a), I don't see any WAR on $6 from I1 to I3. Do you??
No, since neither I1 nor I3 modify $6, it's impossible to have any hazard. RAR is not a hazard.
|
Regarding the MIPS assembly language, which is taught in Patterson's book, I have a question about inserting NOPs between instructions to avoid pipeline stalls. Consider the following code:
lw $s5, -16($s5)
sw $s5, -16($s5)
add $s5, $s5, $s5
We see there is a RAW hazard for $s5 between lw and sw. There is also a WAW hazard for $s5 between sw and add. So we have to insert two NOPs to avoid stalls. In other words, the pipeline diagram is:
lw IF ID EX MEM WB
sw IF ID --- EX MEM WB
add IF ID EX MEM -- WB
When sw is going to be executed, it has to wait for lw to put the data in a register. Therefore, there is one bubble. Also, when add wants to write the final result, it has to wait for the completion of the previous instruction (sw). That is another bubble. So the modified code is:
lw
NOP
sw
NOP
add
But the solution proposes the following code:
lw
NOP
NOP
sw
add
Which one is correct? I think mine!
|
MIPS language to avoid pipeline stalls
|
Before the line that says:
if test $E -ge $I
temporarily place the line:
echo "[$E]"
and you'll find something very much non-numeric, and that's because the output of df -k looks like this:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb1 954316620 212723892 693109608 24% /
udev 10240 0 10240 0% /dev
:          :
The offending line there is the first, which will have its fifth field Use% turned into Use, which is definitely not an integer.
A quick fix may be to change your usage to something like:
df -k | sed -n '2,$p' | ./filter -c 50
or:
df -k | tail -n+2 | ./filter -c 50
Either of those extra filters (sed or tail) will print only from line 2 onwards.
If you're open to not needing a special script at all, you could probably just get away with something like:
df -k | awk -vlimit=40 '$5+0>=limit&&NR>1{print $5" "$6}'
The way it works is to only operate on lines where both: the fifth field, converted to a number, is at least equal to the limit passed in with -v; and the record number (line) is two or greater. Then it simply outputs the relevant information for those matching lines.
This particular example outputs the file system and usage (as a percentage like 42%) but, if you just want the file system as per your script, just change the print to output $6 on its own: {print $6}. Alternatively, if you do want the percentage but without the %, you can use the same method I used in the conditional: {print $5+0" "$6}.
|
In the sections below, you'll see the shell script I am trying to run on a UNIX machine, along with a transcript. When I run this program, it gives the expected output but it also gives an error, shown in the transcript. What could be the problem and how can I fix it?
First, the script:
#!/usr/bin/bash
while read A B C D E F
do
E=`echo $E | cut -f 1 -d "%"`
if test $# -eq 2
then
I=`echo $2`
else
I=90
fi
if test $E -ge $I
then
echo $F
fi
done
And the transcript of running it:
$ df -k | ./filter.sh -c 50
./filter.sh: line 12: test: capacity: integer expression expected
/etc/svc/volatile
/var/run
/home/ug
/home/pg
/home/staff/t
/packages/turnin
$ _
|
Bash error: Integer expression expected
|
These processors are used when you upload files into the Sitecore Media Library.
The CheckPermissions processor checks permissions for the folder where you upload files. If you don't have permission, it aborts the upload.
The CheckSize processor checks whether the size of every uploaded file is greater than the Media.MaxSizeInDatabase value from web.config.
The other 3 processors resolve the folder where you upload files, add media items, and attach the uploaded file to the media item.
|
Looking through the web.config of a Sitecore project that we have, I can see that there is a pipeline in the <uiUpload> section of the code which is called CheckSize. I am hoping that I can use this to check the size of an item that is being uploaded to Sitecore, in order to open a dialog to warn the user of the possible impact of publishing a large file to the site and offer them the opportunity to either back out of the publish or continue.
Does anyone here know what this pipeline does and if I can alter it to perform the checks I have listed above?
<uiUpload>
<processor mode="on" type="Sitecore.Pipelines.Upload.CheckPermissions, Sitecore.Kernel" />
<processor mode="on" type="Sitecore.Pipelines.Upload.CheckSize, Sitecore.Kernel" />
<processor mode="on" type="Sitecore.Pipelines.Upload.ResolveFolder, Sitecore.Kernel" />
<processor mode="on" type="Sitecore.Pipelines.Upload.Save, Sitecore.Kernel" />
<processor mode="on" type="Sitecore.Pipelines.Upload.Done, Sitecore.Kernel" />
</uiUpload>
|
What does the Sitecore CheckSize Pipeline do?
|
You just need to add the -terse option to the first qsub so that it only displays the job id rather than the whole string.
JID=`qsub -terse -cwd touch a.txt`
|
Suppose I want to write a pipeline of tasks to submit to Sun/Oracle Grid Engine.
qsub -cwd touch a.txt
qsub -cwd -hold_jid touch wc -l a.txt
Now, this will run the 2nd job (wc) only after the first job (touch) is done. However, if a previous job with the name touch had run earlier, the 2nd job won't be held since the condition is already satisfied. I need the job id of the first job.
I tried
myjid=`qsub -cwd touch a.txt`
But it gave
$ echo $myjid
Your job 1062487 ("touch") has been submitted
|
Get SGE jobid to make a pipeline
|
I'm not actually sure what you are trying to do in this pipeline, but something seems very wrong. It is possible that I completely misunderstood what you were trying to do, so in this case please elaborate more on the details of your implementation. In the meantime, here are some things that could be problematic:
- You should inherit from the ImagesPipeline if your goal is to alter the default behaviour of this pipeline. You should also make sure your pipeline is enabled in settings.py.
- The method process_item() should return an Item() object or raise a DropItem() exception, but you are returning a string? And to make it worse, it is a string created by implicitly casting an item object to string? This makes no sense in this context. Even less if you consider that you should not override that method in the ImagesPipeline.
- You have no implementation of item_completed(), which is the method called when all image requests for a single item have completed (either finished downloading, or failed for some reason). From there, you can see the path the image has been downloaded to, and move it if necessary.
Please read the official documentation for Downloading Item Images for further clarification.
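As a rough illustration of one common per-item-folder pattern (hedged: the file_path signature has changed between Scrapy versions, and the folder name taken from item['Property_name'] simply mirrors the question's own field), you can carry the folder through request.meta and override file_path:

import os
import scrapy
from scrapy.contrib.pipeline.images import ImagesPipeline  # scrapy.pipelines.images in newer releases

class PerItemImagesPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        for image_url in item['image_urls']:
            # carry the per-item folder name along with every image request
            yield scrapy.Request(image_url, meta={'folder': item['Property_name']})

    def file_path(self, request, response=None, info=None):
        # store each image under <folder>/<original file name> instead of the default layout
        return '%s/%s' % (request.meta['folder'], os.path.basename(request.url))

For this sketch to take effect, the class would still need to be registered in ITEM_PIPELINES and IMAGES_STORE set in settings.py.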
|
EDIT: This is not a duplicate for an older version of Scrapy. Scrapy has changed a lot in recent years and the current version is 0.24. Scrapy has evolved dramatically over its few years of development, and most of the Stack Overflow answers regarding Scrapy are outdated.
I am using Scrapy 0.24.4 and want to download images into a separate folder for each link. Right now, using the Scrapy documentation, I am able to download images, but they all reside in only one folder. I am using the code below so that images get saved in a separate folder per URL, but I am unable to achieve it. This code doesn't even run; it resides in pipelines.py. Only the default behavior of the images pipeline gets executed, i.e. it downloads every URL in item['image_urls'].
pipelines.py
import scrapy
from scrapy.contrib.pipeline.images import ImagesPipeline
from scrapy.exceptions import DropItem
import urlparse
import urllib
class RecursiveScrapPipeline(object):
"""Custom Image to save in Structured folder """
def process_item(self, item, spider):
#item currently is image name
image_guid = item
return "%s/full/%s.jpg"% (id,image_guid)
#this should work , exactly as per documentation
def get_media_requests(self, item, info):
for image_url in item['image_urls']:
            yield scrapy.Request(image_url, meta={'id': item['Property_name']})
Am I on the correct track? What could possibly be the solution?
|
structured image download in scrapy pipeline
|
You can check by seeing if it's set when BeginProcessing is called, or if it is only set during ProcessRecord. Non-pipeline properties are set before BeginProcessing is called.
|
I am creating PowerShell cmdlets in C# by extending the PSCmdlet class.
I need to use the same parameter for pipeline input and normal parameter input. E.g.:
[Parameter(Mandatory = true, ValueFromPipeline = true, ValueFromPipelineByPropertyName = true)]
public Object Connection;
Here the Connection parameter can take both pipeline input
$connectionValue | Cmdlet-Name
and also normal parameter input using
Cmdlet-Name -Connection $connectionValue
Is there a way in C# by which I can find out if the parameter value is piped to the cmdlet or provided using -Connection?
In PowerShell this can be done by checking if $input is empty or not. Is there any parameter property that can indicate the input type?
|
C# PowerShell pipeline input flag/property
|
The memory address is decoded at the ID stage, and the EXE stage works with the register address, so the DMEM stage is there to put the data into the right register.
|
Referring to the Wikipedia article http://en.wikipedia.org/wiki/Classic_RISC_pipeline, I am a little unsure what the "memory access" stage actually does. If "execute" actually does the execution, what purpose is there in accessing memory after the execution has taken place (which is what the Wikipedia article suggests)?
|
Classic RISC pipeline- what does "memory access" stage actually do?
|
Now it works! The solution was to add "format" to the capsfilter. The previous caps filter string was:
caps = gst.Caps('video/x-raw-yuv,width=640,height=480,framerate=30/1')
and now it is:
caps = gst.Caps('video/x-raw-yuv,format=(fourcc)I420,width=640,height=480,framerate=30/1')
The problem was that my webcam's default output pixel format was "YUYV" and the theoraenc element in my fileSink bin did not accept this format, so adding format=(fourcc)I420 helped.
Still, I don't know why the previous capsfilter string worked with gst-launch, but I don't mind now.
Thanks for the help
|
This works:
gst-launch -e v4l2src ! video/x-raw-yuv,width=640,height=480,framerate=30/1 ! tee name=splitter ! queue ! autovideosink splitter. ! queue ! theoraenc ! oggmux ! filesink location=testogg.ogg
I'm trying to do the same in a dynamic way using Python and pygst. The autovideosink branch is always there, and after user input I want to attach the filesink. This is the dynamic connection code:
fileSink = self.getFileSink()
pad = fileSink.get_static_pad('sink')
pad.set_blocked_async(True, self.padBlockedOnRecordStart, None)
self.player.add(fileSink)
fileSink.set_state(gst.STATE_PLAYING)
self.player.get_by_name('splitter').link(fileSink)
pad.set_blocked_async(False, self.padBlockedOnRecordStart, None)
On linking I get this error:
Error: GStreamer encountered a general stream error. gstbasesrc.c(2625): gst_base_src_loop (): /GstPipeline:player/GstV4l2Src:video:
streaming task paused, reason not-negotiated (-4)
Any ideas?
|
gstreamer dynamic pipeline filesink add, getting not-negotiated error
|
Looks like you are out of luck with the current XProc standard. It states that parameters are name/value pairs where the data type of the values must be string or untypedAtomic. Don't ask me why.. http://www.w3.org/TR/xproc/#parameters
If you won't be composing the contents of your configuration dynamically, but are merely passing around the contents of fixed files, you could pass through just a path to the appropriate config file, and use fn:doc() to read it from within the XSLT files.
I'd recommend against writing config files on the fly. Execution order within XProc may not be as sequential as you might expect..
An alternative would be to pass through each config setting as a separate parameter, but then each setting would still have to comply with the flat parameter value type..
HTH!
|
I'm trying to translate my batch file calling Saxon (version 8.9) into an XProc pipeline (Calabash).
This is my batch call:
java -jar saxon8.jar -o out.xml in.xml style.xsl +config=config-file.cfg
The parameter config is defined in the stylesheet in this way:
<xsl:param name="config" as="document-node()"/>
The XProc part looks like this:
<p:load name="configLoad">
<p:with-option name="href" select="'config-file.cfg'"/>
</p:load>
<p:xslt name="config">
<p:input port="source">
<p:document href="in.xml"/>
</p:input>
<p:input port="parameters">
<p:inline>
<c:param name="config">
<p:pipe port="result" step="configLoad"/>
</c:param>
</p:inline>
</p:input>
<p:input port="stylesheet">
<p:document href="style.xsl"/>
</p:input>
</p:xslt>
The error message is this:
Required item type of value of variable $config is document-node(); supplied value has item type xs:string
I know the <p:exec> step but I don't want to use it, because the config file shall be generated by other XSLT transformations later. It shall also be reused by other XProc steps.
Is there a possibility to call the XSLT stylesheet with the correct parameter type?
Thanks for your help!
|
XSLT with XProc - parameter binding in the required type
|
Do you mean you want to write the results into the same file, one after the other? Then use >> instead of >. The >> operator appends to a file instead of overwriting the complete content like > does.
In your case, the commands would be like this:
python test_1_result.py >> result.txt
|
I'm trying to write the output displayed in the terminal to a file. Is there any pipe command to run the following two commands at the same time but sequentially? So basically it will run the first command first, and the result of the first command will be used by the second command. Right now I'm running the commands one after another.
python test_1_result.py > result_1.txt
python test_2_result.py > result_2.txt
Thanks in advance for any suggestion.
|
Writing the Terminal displayed output in a file
|
So I found that there's a function that is called in every pipeline when the spider closes, after it's finished crawling and everything has gone through the pipeline, which is:
def close_spider(self, spider):
    pass
There's also a function called on startup, which is:
def open_spider(self, spider):
    pass
I'm working on writing a scrapy pipeline that will call a function to clear our CDN's edge servers of the scraped URLs. I figured out how to store the list of visited URLs easily enough, but the issue is knowing when the crawler is done.
The CDN's API takes URLs in batches of 100, so I can easily call its clear function every 100 URLs, but if there are 543 URLs to crawl, the last 43 won't get sent to the CDN's clear function.
I've been looking at the scrapy signal documentation, but I can't figure out if the spider_closed signal is called when the last request is received or when all items are through the pipeline. If it's the latter, it's too late to know to call the API with the last 43 URLs.
The other option would be to add an extension that calls the CDN's API when it receives the spider_closed signal, but how does it know all the URLs that the spider has seen? I can build a list of them in the item pipeline, but how do I get that to the extension? (I could maybe use the item_scraped signal, which just occurred to me.)
So yeah, is there a way to know, inside the pipeline, when there are no more items coming? And are there multiple pipelines running concurrently, or is each pipeline a singleton?
|
Store scrapy items to process after spider completes
|
AFAIK the XML disassembler always extracts all child nodes of the specified body_xpath element - it would have been a nice feature to be able to specify an Item XPath instead :(. You can work around this limitation by either:
- creating a schema for the undesirable <BatchID> element, and then just eating instances of it, e.g. creating a subscribing send port using TomasR's Null Adapter, or
- transforming the envelope in a map on the receive port, before the envelope is debatched, where the transform strips out the unwanted Batch element.
The following XSLT should do the trick:
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" xmlns:var="http://schemas.microsoft.com/BizTalk/2003/var" exclude-result-prefixes="msxsl var" version="1.0" xmlns:ns0="http://BizTalk_Server_Project3.Envelope">
<xsl:output omit-xml-declaration="yes" method="xml" version="1.0" />
<xsl:template match="/">
<xsl:apply-templates select="/ns0:Items" />
</xsl:template>
<xsl:template match="/ns0:Items">
<ns0:Items>
<!--i.e. Copy all Item elements and discard the Batch elements-->
<xsl:copy-of select="ns0:Item"/>
</ns0:Items>
</xsl:template>
</xsl:stylesheet>
See here on how to convert a btm from a visual map to XSLT.
|
I'm trying to split an incoming message in the following form:
<Items>
<BatchID>123</BatchID>
<Item>...</Item>
<Item>...</Item>
<Item>...</Item>
</Items>
I've got a pipeline with an XML disassembler which takes the Items schema and outputs the Item schema. On the Items schema, the Envelope property is set to true, and the "Body XPath" property points to the Items element.
However, when I try to process a file, I get the error: Finding the document specification by message type "BatchID" failed. Verify the schema deployed properly. Presumably this is because it expects only Item elements, and it doesn't know what to do with the BatchID element.
If I make the BatchID an attribute on the Items element (like a sensible person would have), everything works fine. Unfortunately, I'm stuck with the layout. At this time, I do not care what the value of BatchID is. How do I get it to split the message?
|
In Biztalk, how do I split an envelope with an extra element?
|
You need to specify it as:
find * -perm -a+r
Note the dash in front of a.
|
I need to write a command pipeline that will show all non-hidden files that have read permissions for all users.
I don't know why this wouldn't work:
find * -perm a=r -print
I get no output and am not sure where I am going wrong. Please help.
|
Finding a File With Specific Permissions
|
You can use MVC Action Filter Types. Reference: http://msdn.microsoft.com/en-us/library/dd410209(v=vs.90).aspx
|
I'm using ASP.NET MVC3 and I need to save to a database all the things the user does in my application (where each click is, IP, date, request, client info).
Where during this processing can I replace the default behavior or inject my own logic? I noticed that there are ASP.NET MVC extensibility points, but I don't know where I have all the data that I need to save to the database.
|
log mvc pipeline [closed]
|
Apache Camel's goal is more to be a mediator/routing engine in distributed systems and systems integration. That said, as you noticed, it is very lightweight and could easily serve as an engine for parallelized execution of data flows. I don't think you should see Camel as a replacement, but rather as an alternative.
|
I am implementing a parallelized data processing system that involves a bunch of conversions and filters of data as it moves through multiple stages. I recognize theApache Commons Pipelineproject as a good fit for this requirement, but Apache Camel seems to provide a superset of that functionality. Does Camel replace the Commons Pipeline?
|
Does Apache Camel replace Apache commons pipeline
|
For compatibility.
MOV LR, PC
LDR PC, =myfunc
You don't want to break all the old code just because the pipeline was changed.
|
What if the number of pipeline stages is not 3, such as in the ARM1156T2-S (also ARMv6), which has 9 stages:
Fe1 Fe2 De Iss Fe3 Sh ALU Sat WBex
Is the PC still the address of the current instruction plus 8?
|
in ARMv6, why the value of PC is current instruction plus 8?
|
You are directing GitLab to look for XML files in the path gosecuri/target/surefire-reports/TEST-*.xml at the end of your pipeline file, here:
reports:
junit:
    - gosecuri/target/surefire-reports/TEST-*.xml
GitLab is telling you that its working directory is C:\GitLab-Runner\builds\CXWJTzxhV\0\ssc-onyx-qa\ssc-onyx-test-automation. Those are completely different paths, so the Runner cannot find the files that you are pointing it to.
The GitLab Runner is operating in a different working directory than the one you are working in. It has access to your files, and is able to determine which file you're pointing it to if you use relative file paths. This would be more intuitive if you were running the pipeline on gitlab.com, as it is more clear that they are using an isolated environment that is not the same as your computer, but you can imagine it as being the same situation for a local Runner too. The Runner uses an isolated environment on your computer with access to your files at GitLab.
If you insist on using a hard-coded path, you can make it relative to C:\GitLab-Runner\builds\CXWJTzxhV\0\ssc-onyx-qa\ssc-onyx-test-automation and find out where the directories begin to overlap with the ones in your folder, but that will take a lot of "ls" commands throughout your pipeline and is not best practice.
|
I am a beginner and this is the first time I am building a CI/CD pipeline for my maven java selenium project and trying to run the pipeline locally by installing a runner in my local machine.
Every time I run my pipeline I receive an error stating: "WARNING: gosecuri/target/surefire-reports/TEST-*.xml: no matching files. Ensure that the artifact path is relative to the working directory (C:\GitLab-Runner\builds\CXWJTzxhV\0\ssc-onyx-qa\ssc-onyx-test-automation)
ERROR: No files to upload". Please can someone help me. The pipeline code is below:
stages:
- test
image: "maven:3.8.7"
variables:
MAVEN_OPTS: >-
-Dhttps.protocols=TLSv1.2 -Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository
-Dorg.slf4j.simpleLogger.showDateTime=true -Djava.awt.headless=true
MAVEN_CLI_OPTS: "--batch-mode --errors --fail-at-end --show-version"
cache:
paths:
- .m2/repository/
- target/
test-job:
stage: test
services:
- selenium/standalone-chrome
script:
- echo "Compiling the code..."
- mvn --version
- mvn "$MAVEN_CLI_OPTS" clean test -Dhost=selenium__standalone-chrome
- Get-Command zip
artifacts:
when: always
paths:
- target/
reports:
junit:
- gosecuri/target/surefire-reports/TEST-*.xml
|
Gitlab CI/CD Pipeline generates error "no matching files. Ensure that the artifact path is relative to the working directory"
|
One option is:
db.collection.aggregate([
//{$lookup...}
{$project: {
_id: 0,
date: 1,
exchange_rate: {$getField: {
input: {$first: {$filter: {
input: {$objectToArray: "$exchange_rates"},
cond: {$eq: ["$$this.k", "$code"]}
}}},
field: "v"
}}
}}
])
See how it works on the playground example.
|
Given a document with a subdocument from a $lookup aggregation, I want to project a specific field from the subdocument based on a field of the $$ROOT document. Example:
{
"date": "2023-08-20",
"code": "USD",
"exchange_rates": {
"USD": 1.01,
"EUR": 2.01,
"CNY": 3.01
}
}
I want to get:
{
"date": "2023-08-20",
"exchange_rate": 1.01
}
This subdocument came from a $lookup operation, so maybe I can put the project (or any other command) directly inside the pipeline argument of this aggregation.
|
How to project specific fields from subdocument baed on value of field from same document in MongoDB
|
You can give the date in the wildcard file name.
These are my files in the source folder:
First generate the yyyyMMdd-format filename using a Set Variable activity with the below expression:
@formatDateTime(utcnow(),'yyyyMMdd')
Then, use this in the copy activity source wildcard file name. Give the below expression in the file name:
File@{variables('date')}*.TXT
Give your target file in the sink, and this is my result.
|
OK, I have to copy a daily file from a remote storage account. This file is generated with the name format File20230515063915.TXT, meaning: the word "File" + year + month + day + hour, etc., everything in my time zone (+5).
The thing is, the storage account has several dates, so I need to copy just today's file (today as local time, not UTC), and for the hour, minutes, and seconds part there is no way for me to calculate it as a string, since it always varies.
My solution was to create a variable that can store up to the day part and use a wildcard (*) for the rest of the file name. But at this point I am confused about the pipeline expression builder to calculate it, and about where/how to set the wildcard, whether on the file path for the dataset or on the pipeline itself, and how. I would really appreciate it if somebody could help me with this.
An alternative solution could be to list the files in the storage account, detect new files by comparing with the previous run, and use the new file names only, but I think that will require more work. A dynamic content pipeline expression example would fix the issue, or a way to work around it.
|
wildcard and dynamic filename on datafactory read file
|
As a workaround, you can replace $_ by $PSItem. $_ is an alias for $PSItem (read more).
|
I want to execute some PowerShell code on Linux via Ansible without outsourcing it to a file. I am using the following multiline string command, which works great in general, but does not work as expected as soon as I try to access the current pipeline object a.k.a. $_:
- name: MRE
ansible.builtin.command: |
pwsh -c "& {
1..3 | Foreach-Object {
$_
}
exit 1
}"The actual output is:fatal: [host.domain.tld]: FAILED! => changed=true
cmd:
- pwsh
- -c
- |-
& {
1..3 | Foreach-Object {
$_
}
exit 1
}
delta: '0:00:00.639590'
end: '2023-03-30 14:48:53.343005'
msg: non-zero return code
rc: 1
start: '2023-03-30 14:48:52.703415'
stderr: ''
stderr_lines: <omitted>
stdout: ''
stdout_lines: <omitted>
As you can see, stdout is empty, but it should contain the numbers 1 to 3. If $_ is not just a number but an object and I try to access an attribute, it gets translated to /usr/bin/python3.attribute. So maybe bash or python are replacing $_ or _ before it reaches PowerShell.
I then tried to escape $_ like \$_, $\_ and \$\_, but without success. How can I use $_ in this example?
|
How can I use $_ in a PowerShell scriptblock in Ansible?
|
I think you can accomplish this by just using None as the value in the dict:
scoring = {
'pipe_score': None,
'my_score': my_scoring_func,
}To address your attempts a bit more (moving from comments):Always when debuggingNaNscores in a hyperparameter search, seterror_score="raise"so that you get the error traceback.make_scorertakes ametricwith signature(y_true, y_pred)and turns it into ascorerwith signature(estimator, X_test, y_test). Sincepipe.scoreis already a scorer (withselfas the estimator), you don't need this convenience function.That won't fix things though:pipe.scoreis the method of theinstancepipe, and so in the search you'll be trying to call it with the same instance ofpipe, not the fitted versions that are being created during the search: you'll get all the same scores, or an error.
|
I am using a pipeline in a hyperparameter grid search in sklearn. I would like the search to return multiple evaluation scores: one a custom scoring function that I wrote, and the other the default score function of the pipeline.
I tried using the parameter scoring={'pipe_score': make_scorer(pipe.score), 'my_score': my_scoring_func} in my GridSearchCV instance (pipe is the name of my pipeline variable), but this returns nan for pipe_score.
What is a correct way to do this?
|
Return pipeline score as one of multiple evaluation metrics
|
The sh command is using the Groovy installation of your Jenkins agent. The -cp argument specifies the classpath; this is where your additional dependencies will reside. For example, if BuildReport.groovy requires additional dependencies, you can point to a directory where those dependencies are located. The following is from the groovy man pages:
-cp, -classpath, --classpath=<path>
Specify where to find the class files - must be
first argumentHaving said that, in your case, if you don't have any dependent Classes, specifying the classpath would be redundant.
|
I am going through a Jenkins pipeline someone else wrote for my organization. I saw that inside that Jenkins pipeline it calls an external Groovy script, but I have no idea about that process.
This is how it is called:
sh "groovy -cp /apps/scripts /apps/scripts/BuildReport.groovy ${env.BUILD_URL} ${env.BUILD_ID}"
I know ${env.BUILD_URL} ${env.BUILD_ID} are the arguments that are passed to the Groovy script. But what is the meaning of groovy -cp? And why is /apps/scripts mentioned two times? Can someone clarify, please? Thanks in advance!
|
How to call external groovy script from jenkins pipeline
|
db.collection.aggregate([
{
"$project": {
"test": {
"$let": {
"vars": {
"sum": {
$add: [
"$a",
"$b"
]
},
"d": 3
},
"in": {
"c": "$$sum",
"d": {
"$multiply": [
"$$sum",
"$b"
]
}
}
}
}
}
}
])
Explained: set the sum via $let and use it later as a variable to calculate the multiplication.
Playground
|
I am new to working with MongoDB. I am trying to use calculated fields in the $project stage to resolve other fields. I'll show you a simplified example.
Input:
[
{
a: 5,
b: 3
},
{
a: 2,
b: 1
},
]
Code:
db.collection.aggregate([
{
$project: {
_id: 0,
c: {
"$add": [
"$a",
"$b"
]
},
d: {
"$multiply": [
"$a",
"$c"
]
}
}
}
])
Output:
[
{
"c": 8,
"d": null
},
{
"c": 3,
"d": null
}
]resultI only get null values, I have tried to solve it using $let without results.
A simple way to solve it would be to replicate the $add operation, but if it is a complex calculation like the one I am dealing with in my real project and it is replicated many times, as is the case, it could be performing unnecessary operations.Help to me pleasesample playgrond
|
Calculated fields in the same $project stage of the pipeline aggregate mongodb?
|
To create a relationship between two separate DAGs, Airflow provides ExternalTaskSensor to add a dependency on a task in another DAG. When you use this sensor, you can see the graph of DAG dependencies at <AIRFLOW_SERVER_URI>/dag-dependencies. Also, Airflow provides experimental lineage support based on OpenLineage, which allows you to use an external lineage platform (e.g. Marquez). With a lineage tool, you can define the dependencies between the different Airflow DAGs to build bigger business/functional pipelines and track the success or failure of the whole pipeline.
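A minimal sketch of what that sensor could look like in the downstream DAG (the DAG ids, task ids and schedule below are assumptions, not taken from your project; written against Airflow 2.4+):
from datetime import datetime
from airflow import DAG
from airflow.operators.empty import EmptyOperator
from airflow.sensors.external_task import ExternalTaskSensor

# Downstream DAG: waits for the last task of the upstream DAG before doing its own work.
with DAG(
    dag_id="downstream_dag",            # hypothetical id
    start_date=datetime(2023, 1, 1),
    schedule="@daily",                  # assumed to match the upstream schedule
    catchup=False,
):
    wait_for_upstream = ExternalTaskSensor(
        task_id="wait_for_upstream",
        external_dag_id="upstream_dag",   # hypothetical id of the first DAG
        external_task_id="final_task",    # hypothetical last task of that DAG
        # If the two DAGs don't share a schedule, set execution_delta / execution_date_fn.
    )
    do_work = EmptyOperator(task_id="do_work")
    wait_for_upstream >> do_work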
|
In Airflow most pipeline consists of a group of DAGs each DAG contains tightly related tasks.
Does Airflow have some construct/object which can group DAGs together? So that one can track the success or failure of the whole pipeline?Something like in the picture:
|
Airflow grouping DAGs by common id or params
|
Using if: ${{ contains(github.ref, 'bug_fix') || github.ref == 'refs/heads/fix_bug' }} resolved the issue, but does anyone know why I need the refs/heads prefix? Thanks
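For context: github.ref holds the fully-qualified ref (refs/heads/fix_bug for a branch push), so comparing it to the bare branch name never matches. If you only care about the branch name, a sketch along these lines should also work on reasonably recent runners (untested here):
- name: Run Step
  # github.ref_name is just the branch or tag name, without the refs/heads/ prefix
  if: ${{ github.ref_name == 'fix_bug' || contains(github.ref_name, 'bug_fix') }}
  run: |
    echo "running on ${{ github.ref_name }}"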
|
I have a GitHub pipeline.I want to run a step conditionally.I'm running onfix_bugbranchMy code:- name: Run Step
if: ${{ contains(github.ref, 'bug_fix') || github.ref == 'fix_bug' }}
run: |Excepted result:Should run the step (or enter into the step).Actual result:Doesn't enter into the step.The step has the didn't run sign.
|
GitHub pipeline run conditional step isn't working
|
Pipelines doesn't currently have afinallyconstruct like this. Any step failure will stop the pipeline immediately.This might be added in the future, but the best way to accomplish this now would be an EventBridge rule on pipeline status change that triggers a Lambda, SageMaker Pipeline, etc which has your failure logic.
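As a rough sketch of that approach, the EventBridge rule's event pattern could look something like this (the field names are from the SageMaker events documentation as I recall them, so verify against the events your pipeline actually emits; the rule's target, such as a Lambda function or a small clean-up pipeline, then holds your failure logic):
{
  "source": ["aws.sagemaker"],
  "detail-type": ["SageMaker Model Building Pipeline Execution Status Change"],
  "detail": {
    "currentPipelineExecutionStatus": ["Failed", "Stopped"]
  }
}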
|
Is there a way to add an end step to a sagemaker pipeline that still runs at the very end (and runs code) even if other previous steps fail. Before I thought we could make it aFail Stepbut that only lets you return an error message and doesn’t let you run code. If we made it a conditional step how would we make sure it ran at the very end without depending on any previous steps. I thought of adding all previous steps as a dependency so it runs at the end, but then the end step wouldn't run if any step before that failed.I tried using the fail step, but I can't provide code. I tried putting it with dependencies but then it won't run if other steps fail before it. I tried putting no dependencies, but then it won't run at the end.
|
How can I add a final step to a Sagemaker Pipeline that runs even if other steps fail
|
You need to provide an emptyparamsvariable in your task, for example:from airflow.decorators import dag, task
from datetime import datetime
default_params = {"start_date": "2022-01-01", "end_date": "2022-12-01"}
@dag(
schedule=None,
start_date=datetime(2021, 1, 1),
catchup=False,
tags=['using_params'],
params=default_params
)
def mydag():
@task
def extract(params={}):
import helper
filenames = helper.extract(start=params.get("start_date"))
return filenames
extract()
_dag = mydag()Now in the UI when youTrigger DAG w/ configyou should be able to see and change the default params. And be able to access it in your dag task.
|
I try to use configs in dag using "trigger w/config".def execute(**kwargs):
dag_run = kwargs['dag_run']
start_date = dag_run.conf['start_dt'] if 'start_dt' in dag_run.conf.keys() else kwargs['start_dt']
end_date = dag_run.conf['end_dt'] if 'end_dt' in dag_run.conf.keys() else kwargs['end_dt']
print(f'start_date = {start_date}, end_date = {end_date}')
dag = DAG(
"corp_dev_ods_test_dag",
default_args=default_args,
description='DAG',
schedule_interval='10 1 * * *',
start_date=days_ago(0),
#params={'dt' : '{{ macros.ds_add(ds, -7) }}'},
catchup=False,
tags=['dev']
)
run_submit = PythonVirtualenvOperator(
task_id='run_submit',
requirements=dag_requirements,
python_callable=execute,
system_site_packages=False,
dag=dag,
op_kwargs={'start_dt' : '{{ macros.ds_add(ds, -7) }}', 'end_dt': '{{ macros.ds_add(ds, -7) }}'}
)
run_submitI got "KeyError": kwargs["dag_run"]. But in case of PythonOperator (Instead of PythonVirtualenvOperator) it works.So, how can I use such parameters in my dag?
|
Airflow trigger dag with config
|
It looks like you did not predict the values forX_testwith yourknn_pipe. The variableknnthat you use in your last line is actually undefined in the example you provide. I guess you have defined it somewhere in the original and thus see this error message.Anyway, just changepreds = knn.predict(X_test)topreds = knn_pipe.predict(X_test)and it will work.
|
I have this pipeline:diamonds = sns.load_dataset("diamonds")
# Build feature/target arrays
X, y = diamonds.drop("cut", axis=1), diamonds["cut"]
# Set up the colnames
to_scale = ["depth", "table", "x", "y", "z"]
to_log = ["price", "carat"]
categorical = X.select_dtypes(include="category").columns
scale_pipe = make_pipeline(StandardScaler())
log_pipe = make_pipeline(PowerTransformer())
categorical_pipe = make_pipeline(OneHotEncoder(sparse=False))
transformer = ColumnTransformer(
transformers=[
("scale", scale_pipe, to_scale),
("log_transform", log_pipe, to_log),
("oh_encode", categorical_pipe, categorical),
]
)
knn_pipe = Pipeline([("prep", transformer), ("knn", KNeighborsClassifier())])
# Fit/predict/score
_ = knn_pipe.fit(X_train, y_train)
preds = knn.predict(X_test)When I run it, it is fitting to the data perfectly fine but I can't score or make predictions because I am getting this error:ValueError: could not convert string to float: 'G'
The above exception was the direct cause of the following exception:
ValueError: Unable to convert array of bytes/strings into decimal numbers with dtype='numeric'It is a classification problem, so I thought the reason for the error was because I didn't encode the target. But even after using LabelEncode on the target, I am still getting the same error.
What might be the reason? I tried the pipeline with other models too. The error is the same. BTW, I am using the built-in Diamonds dataset of Seaborn.
|
ValueError: Unable to convert array of bytes/strings into decimal numbers with dtype='numeric'
|
You can do it usingyamlinstead ofyamlFilepipeline {
agent {
kubernetes {
yaml """
kind: Pod
spec:
containers:
- name: example-image
image: example/image:${params.AGENT_POD_SPEC}
imagePullPolicy: Always
command:
- cat
"""
}
}
parameters {
choice(
name: 'AGENT_POD_SPEC',
choices: ['1.5.0','1.3.0','1.2.0','1.4.0'],
description: 'Agent pod configuration'
)
}
}
|
I have a shared yaml file for multiple pipelines and I would like to parameterize the tag of one of the images in the yaml file.What would be the simplest way to do this? At the moment I am maintaining multipleKubernetesPods.yaml, such asKubernetesPods-1.5.0.yamland interpolating the parameter into the name (yamlFile "KubernetesPods-${params.AGENT_POD_SPEC}.yaml"), but this does not scale well.Can I get parameters into the yaml without having to have the yaml written out in every pipeline?Example pipeline:pipeline {
agent {
kubernetes {
yamlFile 'KubernetesPods.yaml'
}
}
parameters {
choice(
name: 'AGENT_POD_SPEC',
choices: ['1.5.0','1.3.0','1.2.0','1.4.0'],
description: 'Agent pod configuration'
)
}
}Example KubernetesPods.yaml:kind: Pod
spec:
containers:
- name: example-image
image: example/image:<IMAGE-TAG-I-WANT-TO-PARAMETERIZE>
imagePullPolicy: Always
command:
- cat
|
Using Parameters in yamlFile of kubernetes agent in jenkins decorative pipeline
|
One way to make one pipeline execute after another one is via a sensor. The recommended way to do this in Dagster is with an "asset sensor". A solid in the first pipeline yields anAssetMaterialization, and the sensor in the second pipeline waits for that asset to be materialized.Here's an example:https://docs.dagster.io/concepts/partitions-schedules-sensors/sensors#asset-sensors
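A rough sketch against the legacy @solid/@pipeline API used in the question (the asset key name is made up, and the exact sensor arguments, pipeline_name vs. job, depend on your Dagster version):
from dagster import AssetKey, AssetMaterialization, Output, RunRequest, asset_sensor, solid

@solid
def pipeline1_task2(context):
    context.log.info('in pipeline 1 task 2')
    # Record the asset so a sensor attached to pipeline2 can react to it.
    yield AssetMaterialization(asset_key=AssetKey("pipeline1_output"))
    yield Output("my cool output")

@asset_sensor(asset_key=AssetKey("pipeline1_output"), pipeline_name="pipeline2")
def pipeline1_done_sensor(context, asset_event):
    # Kick off pipeline2; put whatever it needs (e.g. where pipeline1 wrote its result) into run_config.
    yield RunRequest(run_key=None, run_config={})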
|
How am I supposed to start a pipeline B after pipeline A completes, and use pipeline A's outputs into pipeline B?A piece of code as a starting point:from dagster import InputDefinition, Nothing, OutputDefinition, pipeline, solid
@solid
def pipeline1_task1(context) -> Nothing:
context.log.info('in pipeline 1 task 1')
@solid(input_defs=[InputDefinition("start", Nothing)],
output_defs=[OutputDefinition(str, 'some_str')])
def pipeline1_task2(context) -> str:
context.log.info('in pipeline 1 task 2')
return 'my cool output'
@pipeline
def pipeline1():
pipeline1_task2(pipeline1_task1())
@solid(input_defs=[InputDefinition("print_str", str)])
def pipeline2_task1(context, print_str) -> Nothing:
context.log.info('in pipeline 2 task 1' + print_str)
@solid(input_defs=[InputDefinition("start", Nothing)])
def pipeline2_task2(context) -> Nothing:
context.log.info('in pipeline 2 task 2')
@pipeline
def pipeline2():
pipeline2_task2(pipeline2_task1())
if __name__ == '__main__':
# run pipeline 1
# store outputs
# call pipeline 2 using the above outputsHere we have three pipelines:pipeline1has two solids, possibly does whatever stuff we wish and returns output from the second solid.pipeline2is supposed to use the output ofpipeline1_task2, eventually do another piece of work and print the output of the first pipeline.How am I supposed to "connect" the two pipelines?
|
Dagster start pipeline from another pipeline using its outputs
|
In your E2E project, the one that receives the trigger, you can tell a job to only run when the pipeline source is a trigger using therulessyntax:build-from-trigger:
stage: build
when: never
rules:
    - if: "$CI_COMMIT_REF_NAME == 'master' && $CI_PIPELINE_SOURCE == 'trigger'"
when: always
script:
- ./build.sh # this is just an example, you'd run whatever you normally would hereThe firstwhenstatement,when: neversets the default for the job. By default, this job will never run. Then using therulesyntax, we set a condition that will allow the job to run. If theCI_COMMIT_REF_NAMEvariable (the branch or tag name) ismasterAND theCI_PIPELINE_SOURCEvariable (whatever kicked off this pipeline) is atrigger, then we run this job.You can read about thewhenkeyword here:https://docs.gitlab.com/ee/ci/yaml/#when, and you can read therulesdocumentation here:https://docs.gitlab.com/ee/ci/yaml/#rules
|
I have an A project and an E2E project. I want to deploy A project trigger E2E pipeline run test but I just want the trigger test stage. we don't need trigger E2E to build deploy ...etce2e_tests:
stage: test
trigger:
project: project/E2E
branch: master
strategy: depend
stage: testI have tried to use the stage in config. but got error unknown keys: stagehave any suggestions?
|
Gitlab CI can trigger other project pipeline stage?
|
I spot my error, I was switching the instantiation of the classes: the custom transformers have to be instantiated inside theColumnTransformer, while theColumnTransformerdoes not have to be instantiated inside the pipeline.The correct code is the following:transformation_pipeline = ColumnTransformer([
('adoption', TransformAdoptionFeatures(), features_adoption),
('census', TransformCensusFeaturesRegr(), features_census),
('climate', TransformClimateFeatures(), features_climate),
('soil', TransformSoilFeatures(), features_soil),
('economic', TransformEconomicFeatures(), features_economic)
],
remainder='drop')
full_pipeline_stand = Pipeline([
('transformation', transformation_pipeline),
('scaling', StandardScaler())
])
|
I'm building a pipeline in scikit-learn. I have to do different transformations with different features, and then standardize them all. So I built aColumnTransformerwith a custom transformer for each set of columns:transformation_pipeline = ColumnTransformer([
('adoption', TransformAdoptionFeatures, features_adoption),
('census', TransformCensusFeaturesRegr, features_census),
('climate', TransformClimateFeatures, features_climate),
('soil', TransformSoilFeatures, features_soil),
('economic', TransformEconomicFeatures, features_economic)
],
remainder='drop')Then, since I'd like to create two different pipelines to both standardize and normalize my features, I was thinking of combiningtransformation_pipelineand the scaler in one pipeline:full_pipeline_stand = Pipeline([
('transformation', transformation_pipeline()),
('scaling', StandardScaler())
])However, I get the following error:TypeError: 'ColumnTransformer' object is not callableIs there a way to do this without building a separate pipeline for each set of columns (combining the custom transformer and the scaler)? That is obviously working but is just looks like useless repetition to me... Thanks!
|
ColumnTransformer inside a Pipeline
|
You just have to feed it as a dictionary.Try this example:from sklearn.preprocessing import MinMaxScaler, PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
X, y = make_regression(random_state=42)
pipe = make_pipeline(MinMaxScaler(), PolynomialFeatures(), Ridge())
pipe
# Pipeline(steps=[('minmaxscaler', MinMaxScaler()),
# ('polynomialfeatures', PolynomialFeatures()),
# ('ridge', Ridge())])
gs = GridSearchCV(pipe, param_grid={'polynomialfeatures__degree': [2,4],
'ridge__alpha': [1,10]}).fit(X, y)
# gs.best_params_
# {'polynomialfeatures__degree': 2, 'ridge__alpha': 1}
|
I had a general doubt for Cross Validation.In the notebook for module 2 it is mentioned that one should use pipelines for Cross Validation in order to prevent data leakage. I understand why , however had a doubt regarding the pipeline function:If I want to use three functions in a pipeline :MinMaxScaler(),PolynomialFeatures(for multiple degrees) and ARidgein the end(for multiple alpha values). Since I want to find the best model after using multiple param values , I will use theGridSearchCV()function which does cross validation and gives the best model score.However after I intialise a pipeline object with the three functions and insert it in theGridSearchCV()function , how do I insert the multiple degrees and aplha values in theparamsparameter of theGridSearchCV()function . Do I insert the params as a list of two lists in the order of which the functions have been defined in the pipeline object or do I send a dictionary of two lists, where the keys are the object names of the functions in the pipeline ?????
|
pipeline and cross validation in python using scikit learn
|
It's possible yes. If I take your example, in job "X" from pipeline 1, you can trigger a pipeline from another project usingGitlab API:script:
- "curl --request POST --form token=TOKEN --form ref=master https://gitlab.example.com/api/v4/projects/9/trigger/pipeline"To ensure triggering only job "X" from pipeline 2, add useonlykeyword withapicondition:job_X:
only:
- api
|
Example:I have 3 different pipelines (projects) in GitLab.
Each pipeline has multiple jobs, each targeting a different remote VM and set a different GitLab CI environment.
The jobs are all manually triggered (currently).
What I am trying to achieve is a linking (multi-project) pipeline that runs like this:
Once I trigger job "X" in the pipeline #1, upon succeeding, that will triggerONLYjob "X" in the pipeline #2 which then again, upon success, will triggerONLYjob "X" in pipeline #3.By job "X" I mean a job that runs on a certain remote VM, I don't want the entire pipeline to run since I don't want to change all targets. All examples I found only work at a pipeline level, not at the job level. What am I missing?PS: I'm new to the GitLab CI scene so please forgive my lack of understanding in case there's an easy solution that I've missed.
|
How to run job-to-job linking in multi-project pipelines on GitLab CI
|
Targets:
- RoleArn: ...
Arn: !Ref StepFunction
Id: "ScheduleStepFunction"
InputTransformer:
InputPathsMap:
Bucket: !Ref Bucket
InputTemplate: "{\"Bucket\": <Bucket>}"It seems like your Targets have an issue because no Input is provided in the target, as the Amazon EventBridge rule documentation suggests. Follow the provided link to see how to use Input in a target, per the AWS User Guide.https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-events-rule-target.html
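One note on why InputTransformer fails here: InputPathsMap values are JSONPath expressions into the incoming event (paths starting with $.), not CloudFormation references, so passing !Ref Bucket (a plain bucket name) is likely what triggers the "InputPath ... is invalid" error. For a static value that comes from a template parameter, a hedged sketch using Input instead:
Targets:
  - RoleArn: ...                # keep your existing role
    Arn: !Ref StepFunction      # Ref on AWS::StepFunctions::StateMachine returns the ARN
    Id: "ScheduleStepFunction"
    Input: !Sub '{"Bucket": "${Bucket}"}'   # static JSON input built from the template parameter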
|
Parameters:
Bucket:
Default: bucket_1
Type: String
AllowedValues:
- bucket_1
- bucket_2
Resources:
StepFunction:
Type: 'AWS::StepFunctions::StateMachine'
Properties:
DefinitionString: |-
{
...
}
RoleArn: ...
StateMachineName: ...
EventRule:
Type: 'AWS::Events::Rule'
Properties:
ScheduleExpression: cron(...)
Targets:
- RoleArn: ...
Arn: !Ref StepFunction
Id: "ScheduleStepFunction"
InputTransformer:
InputPathsMap:
Bucket: !Ref Bucket
InputTemplate: "{\"Bucket\": <Bucket>}"I am trying to pass input parameter namedBucket, defined as a parameter in the CloudFormation Template itself, to a StepFunction Target of the Event Rule.I get this error: InputPath for target ScheduleStepFunction is invalid.Note: The optional parameter 'InputPath' has not been provided also.
|
How to pass input to EventRule for a StepFunction Target in CloudFormation?
|
Try checking the URL in the browser when you are editing the pool, or the Network tab in the browser when you update a pool; they usually contain the full ARN.Alternatively, you can get the ARN with the AWS CLI by getting the project ARN and then listing the device pools of the project.
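For the CLI route, a hedged sketch of the two calls (the project ARN below is a made-up example):
# Device Farm lives in us-west-2, so add --region us-west-2 if that's not your default
aws devicefarm list-projects
# copy the "arn" of your project from the output, then:
aws devicefarm list-device-pools --arn arn:aws:devicefarm:us-west-2:123456789012:project:EXAMPLE-GUID
# each device pool in the response carries its own "arn" field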
|
Where can I find the Amazon Resource Number (ARN) in my device pool?
|
Where can I find the Amazon Resource Number (ARN) in my device pool?
|
I can give you an idea. For example, say pipeline 1 fails for some reason. At that point you can have it create a file in Azure Blob Storage (this is just an example; use whichever activities you prefer). Then create trigger 2 so that it fires when a blob is created.
|
I want to automatically Re-trigger a failed pipeline using the If Condition Activity (dynamic content).Process :Pipeline 1 running at a schedule time with trigger 1 - worksIf pipeline 1 fails, scheduled trigger 2 will run pipeline 2 - worksPipeline 2 should contain if condition to check if pipeline 1 failed- This is the IssueIf pipeline 1 failed, then rerun else ignore - need to fix thisHow can this be done?All help is appreciated.Thank you.
|
Azure Data Factory automatically re-trigger failed pipeline
|
Try removing the --watch flag from the test script in package.json. You don't need watch mode in a CI job: with --watch, Jest keeps waiting for file changes, so the job never exits.
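If you still want watch mode locally, one hedged option is a separate CI-only script (the script names and the jest invocation are assumptions about your setup; if the app is built on react-scripts, running the tests with CI=true has the same effect):
# package.json (fragment)
"scripts": {
  "test": "jest --watch",
  "test:ci": "jest --ci --watchAll=false"
}

# .gitlab-ci.yml: call the CI variant in the unit test job
script:
  - npm install --progress=false
  - npm run test:ci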
|
Expectedgitlab-ci config runs tests during pipeline and completesResultstests run forever and pipeline never finishesProject is a ReactJS/Jest frontend app:https://gitlab.com/futuratum/moon.holdingsPaused Pipeline:https://gitlab.com/futuratum/moon.holdings/pipelines/72013854build site:
image: node:10
stage: build
script:
- npm install --progress=false
- npm run build
artifacts:
expire_in: 1 week
paths:
- dist
unit test:
image: node:10
stage: test
script:
- npm install --progress=false
- npm run test
|
gitlab-ci config, pipeline gets stuck on unit test
|
Is it possible to have an Azure hosted build agent persist between
pipeline stages?No, you can't. The hosted agents are randomly assigned by the server; there is no script or command to request a specific one.Since the Build_stage creates some resources externally that you want to clean up, you can instead run the clean-up command as the last step of the Build_stage itself, as sketched below. Done that way, it does not matter whether you use a hosted or a private agent.
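A hedged sketch of that single-stage layout, keeping your placeholder tool names and forcing the clean-up step to run even when an earlier step fails:
trigger:
- master

stages:
- stage: Build_stage
  jobs:
  - job: Build_job
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: InstallSomeTool@
    - script: invoke someTool
    - script: run some test
    # Same agent, so the state created above is still there;
    # always() makes this step run even if a previous step failed.
    - script: invoke SomeTool --cleanup
      condition: always()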
|
I have a pipeline with 2 stages - a build/test stage, and a Teardown stage that cleans up external resources after the build/test stage. The teardown stage depends on some state information that gets generated in the build/test stage. I'm trying to use Azure hosted agents to do this. The problem is that the way I have it now, each stage deploys a new agent, so I lose the state I need for the teardown stage.My pipeline looks something like this:trigger:
- master
stages:
- stage: Build_stage
jobs:
- job: Build_job
pool:
vmImage: 'ubuntu-latest'
steps:
- task: InstallSomeTool@
- script: invoke someTool
- script: run some test
- stage: Teardown_stage
condition: always()
jobs:
- job: Teardown_job
pool:
vmImage: 'ubuntu-latest'
steps:
- script: invoke SomeTool --cleanupThe teardown stage fails because it's a brand new agent that knows nothing about the state created by the previous invoke someTool script.I'm trying to do it this way because the Build stage creates some resources externally that I want to be cleaned up every time, even if the Build stage fails.
|
Is it possible to have an Azure hosted build agent persist between pipeline stages
|
I think you are on the right track. Do you get any error? Refer to this notebook for instantiating the model from the tuner and using it in an inference pipeline. Edit (based on the comment): to create a model from the best training job of the hyperparameter tuning job, you can use the snippet below.from sagemaker.tuner import HyperparameterTuner
from sagemaker.estimator import Estimator
from sagemaker.model import Model
# Attach to an existing hyperparameter tuning job.
xgb_tuning_job_name = 'my_xgb_hpo_tuning_job_name'
xgb_tuner = HyperparameterTuner.attach(xgb_tuning_job_name)
# Get the best XGBoost training job name from the HPO job
xgb_best_training_job = xgb_tuner.best_training_job()
print(xgb_best_training_job)
# Attach estimator to the best training job name
xgb_best_estimator = Estimator.attach(xgb_best_training_job)
# Create model to be passed to the inference pipeline
xgb_model = Model(model_data=xgb_best_estimator.model_data,
role=sagemaker.get_execution_role(),
image=xgb_best_estimator.image_name)
|
I'm trying to implement the best estimator from a hyperparameter tuning job into a pipeline object to deploy an endpoint.I've read the docs in a best effort to include the results from the tuning job in the pipeline, but I'm having trouble creating the Model() class object.# This is the hyperparameter tuning job
tuner.fit({'train': s3_train, 'validation': s3_val},
include_cls_metadata=False)
#With a standard Model (Not from the tuner) the process was as follows:
scikit_learn_inferencee_model_name = sklearn_preprocessor.create_model()
xgb_model_name = Model(model_data=xgb_model.model_data, image=xgb_image)
model_name = 'xgb-inference-pipeline-' + timestamp_prefix
endpoint_name = 'xgb-inference-pipeline-ep-' + timestamp_prefix
sm_model = PipelineModel(
name=model_name,
role=role,
models=[
scikit_learn_inferencee_model_name,
xgb_model_name])
sm_model.deploy(initial_instance_count=1, instance_type='ml.c4.xlarge',
endpoint_name=endpoint_name)I would like to be able to cleanly instantiate a model object using my results from the tuning job and pass it into the PipelineModel object. Any guidance is appreciated.
|
Creating a model for use in a pipeline from a hyperparameter tuning job
|
@SergeyBushmanov helped me diagnose the error in my title: it was caused by running SimpleImputer on text.I have a further error that I'll write a new question for.
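In other words (a minimal sketch of the fix, not necessarily the exact final code): drop SimpleImputer from the text branch and pass the column name as a scalar so CountVectorizer receives a 1-D series of strings.
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import CountVectorizer

preprocessor = ColumnTransformer(
    transformers=[
        ('num', numeric_transformer, numeric_features),   # unchanged from the question
        # CountVectorizer wants a 1-D sequence of strings: pass the column name as a
        # scalar ('text'), not a list, and leave the imputer out of this branch.
        ('text', CountVectorizer(), 'text'),
    ]
)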
|
I'm trying to use SKLearn 0.20.2 to make a pipeline while using the new ColumnTransformer feature. My problem is that I keep getting the error:AttributeError: 'numpy.ndarray' object has no attribute 'lower'I have a column of blobs of text called,text. All of my other columns are numerical in nature. I'm trying to use theCountvectorizerin my pipeline and I think that's where the trouble is. Would much appreciate a hand with this.from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer
# plus other necessary modules
# mapped to column names from dataframe
numeric_features = ['hasDate', 'iterationCount', 'hasItemNumber', 'isEpic']
numeric_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='median'))
])
# mapped to column names from dataframe
text_features = ['text']
text_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='most_frequent”')),
('vect', CountVectorizer())
])
preprocessor = ColumnTransformer(
transformers=[('num', numeric_transformer, numeric_features),('text', text_transformer, text_features)]
)
clf = Pipeline(steps=[('preprocessor', preprocessor),
('classifier', MultinomialNB())
])
x_train, x_test, y_train, y_test = train_test_split(features, labels, test_size=0.33)
clf.fit(x_train,y_train)
|
SKLearn Pipeline w/ ColumnTransformer: 'numpy.ndarray' object has no attribute 'lower'
|
Well, I found the problem after lots of headaches: Pipelines was trying to build the Angular 7 app with the Angular 6 CLI, even though I was explicitly installing @angular/cli 7.0.2. It turned out Pipelines keeps a cache of node_modules, so it was still using the old CLI; clearing the cache made the error go away.
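For reference, the cache involved is the one declared in bitbucket-pipelines.yml. A hedged sketch of where it lives and one way to sidestep it (the image and scripts below are assumptions about the build; the cache can also be cleared from the Pipelines UI):
# bitbucket-pipelines.yml (sketch)
image: node:10
pipelines:
  default:
    - step:
        caches:
          - node            # caches node_modules between runs; this is what kept the old @angular/cli
        script:
          - npm ci          # npm ci removes any restored node_modules and installs from the lockfile
          - npm run build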
|
I just updated my app to Angular 7, it compiles fine in local but when i try to deploy to bitbucket i get this error with ng buildregistry.registerUriHandler is not a function TypeError:
registry.registerUriHandler is not a function
at Object.runCommand (/opt/atlassian/pipelines/agent/build/node_modules/@angular/cli/models/command-runner.js:47:14)
at default_1 (/opt/atlassian/pipelines/agent/build/node_modules/@angular/cli/lib/cli/index.js:32:54)
at Object. (/usr/local/lib/node_modules/@angular/cli/lib/init.js:125:1)
at Module._compile (module.js:643:30)
at Object.Module._extensions..js (module.js:654:10)
at Module.load (module.js:556:32)
at tryModuleLoad (module.js:499:12)
at Function.Module._load (module.js:491:3)
at Module.require (module.js:587:17)
at require (internal/module.js:11:18)I don't understand what has pipelines to do with this but i think it could be an error with angular, any idea of how to solve this?
|
Angular 7 ng build, registry.registerUriHandler is not a function in bitbucket pipelines
|