Response | Instruction | Prompt
---|---|---|
This should work:
$ find . -type d -name 0,5\ mM -print | while read dir ; do mv -v "${dir}" "${dir/0,5/0,7}" ; done
|
I have a main directory called "Experiment", where I keep several subfolders like "time_data", then several subfolders "bacterial_strain", and at the end "drug concentration", which is written like "0,5 mM". So the final path is:
/Experiment/time_data/bacterial_strain/0,5 mM
I want to rename all the directories in all subdirectories from "0,5 mM" to "0,7 mM". I tried to run several different pipelines such as:
find . -type d | grep 0,5 | mv "0,5 mM/" "0,7 mM/"
or
grep -r "0,5 mM" "/path_to_Experiment_folder" | mv 0,5\ mM 0,7\ mM
But the terminal gives the same error:
mv: rename 0,5 mM to 0,7 mM: No such file or directory
Please, help me: how do I change the pipeline in order to do that?
|
How to rename several directories in all subfolders in a Unix terminal
|
No, it is not possible to change the timeout for a running build. Instead of using different stages, you could try using multiple test jobs on your one stage as each job has the timeout of 60 minutes. One possible way to break it down could be one job per test suite.
|
I'm running e2e tests in a stage in my Bluemix DevOps pipeline, but it is exceeding the 60 minutes limit:
The execution exceeded the time limit of 60 minutes.
One possible solution is to split up your execution.
Finished: ERRORED
Is there a way of increasing the stage timeout? I do not want to split my tests across different stages.
|
Timeout when running stage on Bluemix DevOps pipeline
|
Sure. Simply add
| Where-Object { @($_ -split '\\').Count -eq 14 }
after the Select-Object.
|
I'm querying a highly structured file system. I need to look at the nodes that are at the 14th level of the tree. I've come up with the following based on some other posts on querying filesystems and my own research:
$lines = Get-ChildItem "\\ad1hfdahp001\D$\software\anthill\var\artifacts" -Recurse -Force -EA SilentlyContinue |
Where-Object { $_ -is [System.IO.DirectoryInfo] } |
Select -ExpandProperty FullName
$paths=@()
foreach ($d in $lines) {
$a = $d -split "\\"
if ($a.count -eq 14) {$paths += $d}
}
Is there a way to add the code in the foreach block (or part of it) to the first statement so that $lines only contains the paths with 14 levels? I know this is trivial, but I'm processing a huge amount of data, and I feel as though adding this as a filter to the pipeline in the first statement would be much more efficient than dumping all the directories into an array and then reprocessing the array to select the 14-level entries.
|
Can I collapse this into a single line of code using the pipeline?
|
Instead of reading just one line (msg = gets.chomp) you need to iterate through the lines of stdin:
$stdin.each_line do |msg|
# ...
end
This does not wait for the entire output to be generated, it will process the stream as lines are printed by the first process (ignoring buffering). For example, with these two scripts:
# one.rb
i = 0
loop { puts i += 1 }
# two.rb
$stdin.each_line { |msg| puts "#{msg.chomp}!" }
The first has infinite output, but you will still see output when you run
ruby one.rb | ruby two.rb
|
I am pretty sure that all of you know bash pipelines. What I am trying to do is read the output from one Ruby script and pass it as input to another Ruby script. Here is what I have successfully accomplished so far. First, the file that generates the output:
#one.rb
puts "hello"
And the second file, which handles the input:
#two.rb
msg = gets.chomp
IO.write(File.dirname(__FILE__)+'\out.txt',msg)
Now, the Cygwin command (or Linux) (side question: can this be done as well in Windows cmd or PowerShell?):
ruby one.rb | ruby two.rb
And voila, the out.txt file is created, filled with the string hello. However, when I try to do it in a loop or when handling a stream of data, like 10.times { puts 'hello' } and a loop to read it, it does not work. Can somebody please help me accomplish this or show me how I can do that? I have only found a Python question, but it was about something different, and some bash-like ways. Thank you!
|
ruby script - redirect output from one script to another script
|
use_series is just an alias for $. You can see that by typing the function name without the parentheses:
use_series
# .Primitive("$")
The $ primitive function does not have formal argument names in the same way a user-defined function does. It would be easier to use extract2 in this case:
df_b <- df_list %>%
  llply(.fun = extract2, "b")
Note that in this case you pass the column name as a character value rather than a symbol. I've learned before that $ is tricky to use with the apply family of functions.
|
I was wondering if anybody knew or could help me find the formal argument names for all of the magrittr alias functions. For example, I know that the argument for 'set_colnames' is 'value'.
df <- data.frame(1:3, 4:6, 7:9) %>%
  set_colnames(value = c('a', 'b', 'c'))
Normally, I just pass the arguments in unnamed, but lately I've been trying to make my code as robust as possible, and it's also helpful when you're trying to use these aliases inside of an apply function (or llply in my case). The problem that I'm having is that I have a list of similar df's and I want to extract the same column from each but still retain the list format.
df_list <- list(data.frame('a' = 1:3, 'b' = 4:6),
                data.frame('a' = 7:9, 'b' = 10:12))
What I would like to do is something like
df_b <- df_list %>%
  llply(.fun = use_series, b)
But this doesn't work because I don't know the formal name to pass to 'use_series'.
|
Formal argument names for magrittr aliases
|
You need to use the functional programming technique, partial application. Essentially, you can create another anonymous function that captures your arguments and applies them to your real function.
var additionalCompressionArgument = 123;
using (FileStream input = File.OpenRead("inputData.bin"))
using (FileStream output = File.OpenWrite("outputData.bin"))
using (StreamPipeline pipeline = new StreamPipeline(
(i, o) => Compress(i, o, additionalCompressionArgument), // lambda parameters renamed so they don't clash with the outer input/output locals
Encrypt))
{
pipeline.Run(input, output);
}
static void Compress(Stream input, Stream output, int additionalCompressionParameter){
using (GZipStream compressor = new GZipStream(
output, CompressionMode.Compress, true))
CopyStream(input, compressor);
}
|
I read a great article on a .NET StreamedPipeline implementation in MSDN Magazine (https://msdn.microsoft.com/en-us/magazine/cc163290.aspx). I have a challenge though. In the implementation, they had it call the Compress and Encrypt methods:
using (FileStream input = File.OpenRead("inputData.bin"))
using (FileStream output = File.OpenWrite("outputData.bin"))
using (StreamPipeline pipeline = new StreamPipeline(Compress, Encrypt))
{
pipeline.Run(input, output);
}
These methods were pre-defined without any parameters other than the Stream parameters:
static void Compress(Stream input, Stream output){
using (GZipStream compressor = new GZipStream(
output, CompressionMode.Compress, true))
CopyStream(input, compressor);
}
static void Encrypt(Stream input, Stream output) {
RijndaelManaged rijndael = new RijndaelManaged();
... // setup crypto keys
using (ICryptoTransform transform = rijndael.CreateEncryptor())
using (CryptoStream encryptor = new CryptoStream(
output, transform, CryptoStreamMode.Write))
CopyStream(input, encryptor);
}
What I am really struggling with is how to get other, additional, non-Stream parameters to be sent to the pipeline. E.g., if it's encryption, I want to send the keys and have that included in the pipeline calls. How do I implement the same functionality with additional parameters?
|
How to implement passing of parameter to a StreamedPipeline that uses Action<Stream,Stream>
|
AFAIK, there is no standard feature in Python to do that. There is a module called Pipe which overrides the | operator, but it is reasonably considered harmful (overriding an operator while changing its semantics). However, you may implement a simple fluent interface for that. Here is an example:
class P:
def __init__(self, initial):
self.result = initial
def __call__(self, func, *args, **kwargs):
self.result = func(self.result, *args, **kwargs)
return self
def map(self, func):
self.result = map(func, self.result)
return self
def filter(self, func):
self.result = filter(func, self.result)
return self
def get(self):
return self.result
x = P(2)(lambda x: x + 5)(lambda x: x * x)(lambda x: str(x)).get()
z = P(range(10)).map(lambda x: x * 3).filter(lambda x: x % 2 == 0).get()
print(x, list(z)) # prints 49 [0, 6, 12, 18, 24]
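If you prefer not to define a helper class at all, the same left-to-right chaining idea can be sketched with functools.reduce (this is an illustration, not part of the original answer; the pipe helper name is made up):
from functools import reduce

def pipe(value, *funcs):
    # thread value through each function in turn, left to right
    return reduce(lambda acc, f: f(acc), funcs, value)

result = pipe(2, lambda x: x + 5, lambda x: x * x, str)
print(result)  # prints 49 (as a string)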
|
How can Continuation-passing style be facilitated from Python? (I think that's the right term.) My code is starting to get messy: I have map, filter and chains of lambdas like so:
(lambda a,b: (lambda c:(lambda d: d*d)(c-b))(a*b))(5,6)
"Pipeline expressions" are found in a variety of languages, for example the F# solution (e.g. |>):
let complexFunction =
2 (* 2 *)
|> ( fun x -> x + 5) (* 2 + 5 = 7 *)
|> ( fun x -> x * x) (* 7 * 7 = 49 *)
|> ( fun x -> x.ToString() ) (* 49.ToString = "49" *)
Haskell solution (e.g. do, pipes):
main = do
hSetBuffering stdout NoBuffering
str <- runEffect $
("End of input!" <$ P.stdinLn) >-> ("Broken pipe!" <$ P.stdoutLn)
hPutStrLn stderr str
JavaScript (e.g. async.js):
async.waterfall([
function(callback) {
callback(null, 'one', 'two');
},
function(arg1, arg2, callback) {
// arg1 now equals 'one' and arg2 now equals 'two'
callback(null, 'three');
},
function(arg1, callback) {
// arg1 now equals 'three'
callback(null, 'done');
}
], function (err, result) {
// result now equals 'done'
});
However, I understand that this last strategy is more for asynchronous function response resolution (see: callback hell). How do I present all stages of my processing in a single Python expression/line?
|
Pipelines in Python? - Think async.js / Haskell's `do` / F#'s `|>`
|
You can use this one:
let map funcs vals = funcs |> List.map (Array.map >> ((|>) vals))
The part Array.map >> ((|>) vals) partially applies f to Array.map and then composes it with the application of vals.
|
I'm new to F# and have a question about function pipelines. Let's say we have a function map which maps a list of functions over an array of values, creating a list of arrays:
//val map : ('a -> 'b) list -> 'a [] -> 'b [] list
let map funcs vals =
    funcs |> List.map (fun f -> Array.map f vals)
Usage example:
//val it : float [] list = [[|1.0; 1.144729886|]; [|15.15426224; 23.14069263|]]
map [log; exp] [|Math.E; Math.PI|]
Is there a way to replace the lambda function (fun f -> Array.map f vals) with a chain of pipeline operators? I'd like to write something like:
//val map : 'a [] list -> ('a -> 'b) -> 'b [] list
let map funcs vals = funcs |> List.map (vals |> Array.map)
But this doesn't work. Many thanks, Ivan
|
F# Changing parameters precedence
|
So, to conclude:
- Make sure you do not have any hardcoded <script> element in your .gsp pointing to jQuery.
- If you upgraded from Grails 2.3, make sure you remove all lingering <g:javascript library='jquery'/> and <r:layoutResources/> statements.
- Make sure you have an <asset:javascript src="application.js"/> statement in your layout .gsp.
- Make sure you do not have the jQuery lib in web-app/js.
If you do not bundle your resources, you should get the following entries in the HTML source view:
<script src="/APPNAME/assets/jquery/jquery-1.11.1.js?compile=false" type="text/javascript" ></script>
<script src="/APPNAME/assets/jquery.js?compile=false" type="text/javascript" ></script>
|
I have the following config in the BuildConfig file:
// plugins for the compile step
compile ":scaffolding:2.1.0"
compile ':cache:1.1.7'
compile ":asset-pipeline:1.9.7"
runtime ":jquery:1.11.1"
compile ":jquery-ui:1.10.4"
runtime ':twitter-bootstrap:3.3.2'
No resources plugin, and application.js:
//= require jquery
//= require_tree .
//= require_self
//= require bootstrap
and application.css:
*= require main
*= require mobile
*= require_self
*= require bootstrap
Still, when I load the page I don't see the right path for jQuery, and hence jQuery is not loaded. What am I missing? When I did a view-source, this is what I see:
<script src="/appName/js/jquery/jquery-1.11.1.js" type="text/javascript" library="jquery"></script>
|
grails 2.4 jquery path
|
If you only want the user name of a process, you can use the -o option:
PUID=$(ps -p $PID -o uname=);
echo $PUID
|
I'm trying to finish up a maintenance script, and I'm getting caught up with the following line:
PUID=$(ps aux | grep $PID | grep -v $USER | cut -d' ' -f1)
I'm trying to pull a specific process ID ($PID) out of the ps aux command (while ignoring the process the user just created with the same PID), then eliminate all but the user name of the process owner. Currently, the command runs fine when run in the command line; however, I've been having issues assigning it to the variable $PUID, or even just executing it as a command. Any advice?
EDIT: I'm trying to get this figured out and I believe there is still a problem in passing the variable $PID. Right now it's pulling from a file (which it does properly) using this line:
PID=$(cat /nfs/pdx/home/komcconx/PID/current/pid)
and if I add an echo $PID it returns the proper pid. When I run the command PUID=$(ps -p $PID -o uname=), I get an error saying "ERROR: Process ID list syntax error." and if I run the command with a "1" in the place of $PID it properly returns "root". Any ideas?
Final edit: Found out the issue was that the PID was being pulled from a DOS file, and I was trying to run the command with a DOS newline. This issue is closed!
|
Using variables and multiple pipes in bash?
|
I'm not sure I fully understand what you want to do. But if you have the code for the plugin, you can add a property to the elements that need this (void *) and set that property with the value you want. If you need to have the same object/pointer shared across the whole pipeline, I'd recommend taking a look at GstContext: https://developer.gnome.org/gstreamer/stable/gstreamer-GstContext.html It might be what you need.
|
GStreamer has an internal logging function: gstinfo. However, we have a custom logger object which should be shared by the pipeline and has some specific functionality (SNMP) needed in the application context. The logger has an appropriate API needed by all internal elements of the plugins. (BTW: the plugins in this context are also built by ourselves.) It has built-in thread-safety elements as needed. My question is: how can you pass the pointer to the object created by the pipeline object into all plugin instance objects? Unless we are able to pass the object inside, there will be no way for the internals of the plugins to access it. How does one pass a (void *) object into the plugins?
|
Gstreamer: How do you push external object inside the pipeline?
|
Using a pipe is correct:
python program-1.py sample.txt | python program-2.py
Here's a complete example:
$ cat sample.txt
hello
$ cat program-1.py
import sys
print open(sys.argv[1]).read()
$ cat program-2.py
import sys
print("program-2.py on stdin got: " + sys.stdin.read())
$ python program-1.py sample.txt
hello
$ python program-1.py sample.txt | python program-2.py
program-2.py on stdin got: hello
(PS: you could have included a complete test case in your question. That way, people could say what you did wrong instead of writing their own.)
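If the upstream program produces many lines and you want program-2.py to handle them as they arrive rather than reading everything at once, a line-by-line variant could look like this (just a sketch of the idea):
import sys

# processes each line as it arrives on stdin, instead of waiting for EOF
for line in sys.stdin:
    sys.stdout.write("got: " + line)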
|
I checked out "Reading stdout from one program in another program" but did not find the answer I was looking for. I'm new to Linux and I'm using the argparse module in Python to run arguments with a program through the terminal on my Mac. I have program_1.py that takes in a file via sys.stdin and outputs data to sys.stdout. I'm trying to get program_2.py to take in this data that was output to sys.stdout from program_1.py and take it in as its sys.stdin. I tried something along the lines of:
Mu$ python program-1.py <sample.txt> program-2.py
For simplicity, let's say that 'sample.txt' just had the string '1.6180339887'. How can program_2.py read from sys.stdout of the previous program as its sys.stdin? In this simple example, I am just trying to get program_2.py to output '1.6180339887' to sys.stdout so I can see how this works. Somebody told me to use the | character for pipelines, but I couldn't make it work correctly.
|
Linux : stdout of one program to stdin of another program
|
Your analysis is indeed correct, but I guess your professor is looking for an explanation like this:Suppose the single cycle processor also has the stages that you have mentioned, namely IF, ID, EX, MA and WB and that the instruction spends roughly the same time in each stage as compared to the pipelined processor version. Now you can draw a pipeline diagram for this single cycle processor, and see that it would take 50 cycles on a single cycle processor (which can work on 1 instruction at a time) compared to the 19 cycles on a pipelined processor.Again, I prefer the way you have analyzed it (as the single cycle processor wouldn't really have each of those stages in a different clock cycle, it would just have a very long clock cycle to cover all the stages). Also, you've not mentioned whether this is a stalling-only MIPS pipeline (for which your answer is correct) or if this is a bypassed-MIPS pipeline. If this is the latter, you can shave off a few more cycles and get it down to 15 cycles.
|
I have to compare the speed of execution of the following code (see picture) using a DLX pipeline and a single-cycle processor.
Given:
- an instruction in the single-cycle model takes 800 ps
- a stage in the pipeline model takes 200 ps (based on MA)
My approach was as follows. CPU time = CPI * CC * IC
Single-cycle: CPU time = 1 * 800 ps * 10 instr. = 8000 ps.
Pipeline: CPI = 21 cycles / 10 instr. = 2.1 cycles per instruction, so CPU time = 2.1 * 200 ps * 10 = 4200 ps.
CPU time single-cycle / CPU time pipeline = 8000/4200 = 1.9, so the pipelined code runs 1.9 times faster.
But I was told I have to work with clock cycles and not with time -- "It doesn't matter how much time a CC takes". I don't see how to make a comparison otherwise. Could you please help me?
|
Pipeline processor vs. Single-cycle processor
|
I found the solution for my problem. I simply had the wrong signature for my function:
static GstFlowReturn new_sample (GstElement *appsink, AllElements *element)
and I now use
gst_base_sink_get_last_sample(GST_BASE_SINK(appsink));
to get the sample.
|
I am writing a simple application using gstreamer-1.0 and I want to receive the buffers that have flowed through the pipeline back into my application. To do so, I use the appsink plugin at the end of the pipeline. Until now everything is working, but when I want to receive the buffers, I get these errors:
(app:31759): GLib-GObject-CRITICAL **: g_signal_emit_by_name: assertion 'G_TYPE_CHECK_INSTANCE (instance)' failed
and
(app:31759): GLib-GObject-WARNING **: instance with invalid (NULL) class pointer
Here is the code I have written:
typedef struct _AllElements
{
GstElement *pipeline;
...
GstElement *appsink;
} AllElements;
static void new_sample (AllElements *element)
{
GstSample *sample = NULL;
/* Retrieve the buffer */
g_signal_emit_by_name (element->appsink, "pull-sample", &sample,NULL);
if (sample)
{
g_print ("*");
gst_sample_unref (sample);
}
}
int main(int argc, char *argv[])
{
AllElements element;
... // making and linking all the elements
g_object_set (G_OBJECT (element.appsink), "sync", TRUE, NULL);
g_object_set (element.appsink, "emit-signals", TRUE, NULL);
g_signal_connect (element.appsink, "new-sample", G_CALLBACK (new_sample), &element);
...
gst_element_set_state (element.pipeline, GST_STATE_PLAYING);
...
return 0;
}
Can anyone help me fix this? Thanks to everyone!
|
pull-sample signal using appsink
|
Exactly how the Flat File Disassembler probes the messages is not well documented. However, it usually doesn't matter, because if it doesn't work, well, it just doesn't work in your case. What you can do is wrap the Flat File Disassembler and implement your own, more robust, detection logic. Here's an example: http://biztalkxin.blogspot.com/2012/11/biztalk-2010-create-dynamic-flat-file.html
|
In my BizTalk Project, I need to have a Receive Pipeline that will disassemble four different flat files that each have a unique schema. That's to say, the pipeline must resolve the schema of the flat file sent through as 1 of 4 flat file schemas dynamically at runtime.The best approach I have heard to do this is to just have 4 Flat File Disassemble shapes in the Disassemble stage of my pipeline. The logic behind this is that BizTalk will run through the disassemble shapes one by one until it matches the schema of the document to one of the schemas designated in the disassembler components - sort of like an if statement on the schema type. However, no matter which of the 4 documents I pass through, BizTalk seems to always want to go with the very first schema in line in the pipeline disassemble shapes.So my question(s): Can someone explain in more detail exactly what happens when more than one flat file disassemble shape gets added to a pipeline? Is there a better alternative than to take this approach?
|
Biztalk Pipeline That Will Disassemble Multiple Flat File Schemas
|
stdout is normally buffered. You want line-buffered. Try
stdbuf -oL sh long.sh | sh simple.sh
Note that this loop
for line in `cat LONGIFLE.dat`; do    # see where I put the semicolon?
reads words from the file. If you only have one word per line, you're OK. Otherwise, to read by lines, use
while IFS= read -r line; do ...; done < LONGFILE.dat
Always quote your variables (echo "$line") unless you know specifically when not to.
|
I have two scripts, let's say long.sh and simple.sh: one is very time consuming, the other is very simple. The output of the first script should be used as input of the second one. As an example, "long.sh" could be like this:
#!/bin/sh
for line in `cat LONGIFLE.dat` do;
# read line;
# do some complicated processing (time consuming);
echo $line
done;
And the simple one is:
#!/bin/sh
while read a; do
# simple processing;
echo $a + "other stuff"
done;
I want to pipeline the two scripts like this:
sh long.sh | sh simple.sh
Using pipelines, simple.sh has to wait for the end of the long script before it can start. I would like to know if, in the bash shell, it is possible to see the output of simple.sh per current line, so that I can see at runtime what line is being processed at this moment. I would prefer not to merge the two scripts together, nor to call simple.sh inside long.sh.
Thank you very much.
|
Use I/O redirection between two scripts without waiting for the first to finish
|
The issue is that each pipeline expression is a closure. Where-Object is only going to send the item that matched down the pipeline, not the context. The simplest-to-understand method is to do:
Get-Collection | ForEach-Object { if($_.Name -match $regex) { [int] $Matches[1] } }
|
This piece of code actually works, and I'm curious whether I got lucky (!?) and found a bug in the PowerShell language (this is pseudo-code-ish, but it illustrates my question):
$regex = "prefix([0-9]+)"
$collection = Get-Collection | Where-Object {$_.Name -match $regex} `
| ForEach-Object { [int] $Matches[1] }
Input is basically objects whose property Name may be in the format "prefix[Integer]". If that is the case, I want to extract that integer and insert it into a new sequence. This seemingly works, but it feels like I'm exploiting an implementation detail in the PowerShell language. How would you solve this problem?
|
Powershell pipes and regular expressions, order of evaluation
|
Of course it can! Here's an example launch line to get you going:
gst-launch-1.0 uridecodebin uri=file:///home/meh/Music/Fonky\ Family-\ L\'amour\ Du\ Risque-MGvSx-foo3E.wav ! adder name = m ! autoaudiosink uridecodebin uri=file:///home/meh/Music/kendrick.wav ! audioconvert ! m.
Have a nice day :)
|
I want to create a pipeline in GStreamer that will have two audio sources and will mix the audio with some scaling factor, and send the output data to alsasink. I have seen the example of "adder", but I am not sure if adder can be used with multiple filesrc elements.
I need your help in constructing this pipeline.
|
gstreamer pipeline to mix two audio source
|
As mentioned in a previous answer, you can use a Record Filter, which is the Nr parameter, to accomplish this. You can read more about Record Filters in the Advanced Developer Guide. Since the question is specifically about configuring the pipeline to support this, it is worth pointing out that you have to explicitly enable properties for Record Filter, while all dimensions are automatically available for Record Filter. In most cases 'Brand' is likely to be a dimension, in which case your query will look like this:
Nr=NOT(brand:X)
It is likely you'll want to take this further to use nested filters, for which I suggest you look at the examples in the documentation.
|
I need to filter a set of records based on a 'not equal to' (NEQ) condition.
For example, I want to get all products where the brand is not equal to, say, "X".
How do I configure this in the pipeline?
|
endeca filter not equal to condition
|
I don't think it is possible to alter the pipeline created by playbin; it has internal code to auto-manage this pipeline, and manually modifying it will lead to unexpected results. You can update it using the given properties and signals, though. You can use gst_bin_iterate_elements or gst_bin_iterate_recurse to iterate over elements of the pipeline to print them. It is also possible to use http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstInfo.html#GST-DEBUG-BIN-TO-DOT-FILE:CAPS to have a .dot file created. The dot file is a graph representation of the pipeline and can be converted into an image using the dot application. This way you have the full pipeline drawn into an easily understandable image. It is hard to give you further advice as I don't know what you are trying to do by altering playbin2's pipeline; you can try looking at lower-level elements, like uridecodebin or decodebin2, and look at the autoplugging signals to control what is automatically added by those elements. IIRC this can also be done at playbin2's level. It seems that you are still using GStreamer 0.10; it is no longer developed. If you have no reason to stick to 0.10, please move on to 1.0.
|
I am creating a player which uses playbin2 to create the pipeline. In my code I'm using the following line to create the pipeline:
pipeline = gst_parse_launch("playbin2", &error);
so the pipeline gets created and the player is working. Now I wish to alter the created pipeline. Is there any API in GStreamer which helps to view and edit the pipeline created using playbin2? Also, I wish to print the pipeline created using gst_parse_launch. How do I print the pipeline using the GstElement returned from gst_parse_launch?
|
How to get Pipeline created by playbin in Gstreamer?
|
No for HttpClient-based implementations, as they do not implement it: https://issues.apache.org/jira/browse/HTTPASYNC-8
No as per the Java implementation either.
You can open an enhancement request at Bugzilla.
|
I have been using JMeter for several years, but recently I needed to send HTTP/1.1 requests with pipelining from JMeter to my server. As we know, the pipelining feature allows a client (or web browser) to send more than one HTTP request (GET) in a single send. I went through the JMeter help docs and couldn't find any clue. I know that commercial test instruments like Spirent have this kind of feature, but I only have JMeter. Any ideas? Thanks.
|
Can JMeter send HTTP/1.1 pipelined requests?
|
I'm an idiot. It's not a bug, but it's interesting. Because System.Messaging.MessageQueue implements System.Collections.IEnumerable by enumerating its messages, the behaviour I was seeing was that PowerShell was actually reading the messages off the newly-created queues and putting them into the pipeline rather than the queue objects themselves. Of course, because the queues were new, they were empty, so there was nothing passed on down the pipeline. I just spent most of this afternoon and some hours this evening working this out. I am not proud of myself.
|
I've written the following variations of a function to return a System.Messaging.MessageQueue object:
set-strictmode -version latest
add-type -AssemblyName System.Messaging
$VerbosePreference = 'Continue'
$DebugPreference = 'Continue'
function Get-MsmqQueue1 {
New-Object "Messaging.MessageQueue" -Args '.\private$\barneytest'
}
function Get-MsmqQueue2 {
$q = New-Object "Messaging.MessageQueue" -Args '.\private$\barneytest'
$q
}
function Get-MsmqQueue3 {
$q = New-Object "Messaging.MessageQueue" -Args '.\private$\barneytest'
Write-Output $q
}
function Get-MsmqQueue3a {
$q = New-Object "Messaging.MessageQueue" -Args '.\private$\barneytest'
if ($q) {
Write-Debug "Successfully created $($q.QueueName)"
} else {
Write-Error "No queue object created"
}
Write-Output $q
}
$q = Get-MsmqQueue3a
$q
if ($q) {
Write-Debug $q.QueueName
} else {
Write-Error "No queue object returned"
}
None of them return an object. It's somehow being swallowed up by PowerShell. Note that the "3a" version has logging to prove that the value it's writing to the pipeline is not null, yet no value is returned from the function. How can this be? Is this a PowerShell bug? Many thanks in advance.
|
Why can't a PowerShell function return a MessageQueue object?
|
A simple solution:
cat listofpaths.txt | xargs cat >> combineddata.txt
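If you'd rather do it from Python, as the question mentions, a minimal sketch of the same idea (assuming one path per line in listofpaths.txt; the file names mirror the question):
with open("listofpaths.txt") as paths, open("combineddata.txt", "a") as out:
    for path in paths:
        path = path.strip()
        if path:                      # skip blank lines
            with open(path) as f:
                out.write(f.read())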
|
I've got a list of filepaths and would like to get the data from each of these files into a single communal file. The list of filepaths is in a file, one per line, so I was hoping I could do something like:
cat listofpaths.txt >> combineddata.txt
Obviously that won't work, since that's just going to get me the paths rather than the stuff within them - hmm... I could do this using Python, but I'm assuming there's a one-liner to do it using a UNIX pipeline - anyone know what it'd be?
|
Piping Contents of Files from Filepaths to Single File - UNIX
|
No, not really. The double pipe operator falls through to the right-hand side whenever there's a falsey value on the left-hand side, so one of undefined, null, NaN, 0 or "". If, in all those cases, you want cart to be "": go for it. Use ||.
|
I have the following line in my code:
var cart = $("#dynamo_shop_window .dynamo_content tbody .shop_cart").html();
However, I want the value of cart to be an empty string if there are no matching elements on the page, i.e.:
!$("#dynamo_shop_window .dynamo_content tbody .shop_cart").size();
If this is the case, var cart = null;, well according to Chrome's developer tools anyway. To give it the empty string value, is there any reason why I should use
cart = cart !== null ? cart : '';
after the above code instead of replacing the above code with:
var cart = $("#dynamo_shop_window .dynamo_content tbody .shop_cart").html() || '';
The .html() will never return 0 or any other false-related values.
|
When can you use the pipeline operator for conditional arguments? - JavaScript
|
A crude version:
$line = $null
gc test.txt | %{
if($line){
if($_ -match "FAILED"){
$line
$line = $null
}
}
if($_ -match "^Checking"){
$line = $_
}
}
|
I want to perform some log analysis. An example log file is listed below. I know how to solve this with a procedure/script, but I wonder how it can be done in 'PowerShell style' with pipelines etc. The log file contains strings with a filename and a list of components. Each of them can be 'passed' or 'FAILED'. I want to print all lines starting with 'Checking' which have at least one FAILED component.
Checking : C:\TFS\Datavarehus\Main\ETL\SSIS\DVH Project\APL_STG1_Daglig_Master.dtsx
Checking : C:\TFS\Datavarehus\Main\ETL\SSIS\DVH Project\COPY_STG1_APL_Dekningsgrupper_Z1.dtsx
SQLCommand [Z1 APL_Dekningsgrupper] FAILED the VA1 check (table name as source)
SQLCommand [APL_Dekningsgrupper] passed the VA1 check
SQLCommand [FAK_Avkastningsgaranti_Kollektiv] FAILED the VA1 check (table name as source)
Checking : C:\TFS\Datavarehus\Main\ETL\SSIS\DVH Project\DM_DVH_BpBedrift_InvesterteFond 1.dtsx
SQLCommand [DVH] passed the VA1 check
SQLCommand [Avtale Kurs] passed the VA1 check
At a glance it looks like I need something similar to
gc logtxt | select-string -pattern 'FAILED' -context <nearest line starting with "Checking" in the beginning direction>,<line count from the Checking line to the line with 'FAILED'>
|
select-string with dynamical context parameter value
|
A very late answer, but hopefully someone still finds it useful: to be able to use Microsoft.Xna.Framework.Content.Pipeline, you need to set the target framework of your project to .NET Framework 4, not .NET Framework 4 Client Profile, which seems to be the default. You can do this under the properties of your project.
|
I'm having trouble finding the image file I have loaded into Visual C# 2010 Express when I compile my code. Based on the answer here: Missing option for "Content Importer" in XNA (trying to import video), I'm trying to add a reference to Microsoft.Xna.Framework.Content.Pipeline, but it's not an option available in the Add Reference section. As such, the Content Importer and Content Processor fields mentioned here are not showing up: xna 4.0 and loading images fails. The code crashes whenever I try to load a .jpg, .tga, or .png as a texture. Where can I find the reference, or is there some way to get around it?
|
Unable to add Microsoft.Xna.Framework.Content.Pipeline in Visual C# 2010 Express, Windows 7?
|
They are all related to the I-type MIPS branch instruction, which compares the values of one or two registers and branches if they are/aren't equal. The MIPS PC is 32 bits long, but the branch instruction has only a 16-bit relative address. Those two need to be added together to calculate the new PC value in case of a branch. For this, the 16-bit address is expanded to 32 bits (sign extend + shift to the left by 2 positions). This is the sign-extended offset, which is then added to the current PC to get the target address (the branch address). The branch condition is checked by the ALU unit, and it will assert the zero signal if needed.
This zero signal is then gated by a branch signal from the control unit, and those two control the mux that selects the new value that will be written into the PC. If the zero signal is one and the current instruction is a branch instruction, then the PC will be loaded with the calculated branch address, else PC + 4.
|
I'm studying a "Pipeline Datapaths" lesson and I have found these three terms "sign-extended offset,branch address,Zero signal" regarding to pipe line registers ID/EX and ID/MEM but I have no any idea about those three. Can any one simply explain those three terms. It is difficult to get simple idea from the web because I'm just a beginner.Thanks!
|
Pipeline Datapaths
|
([0-9]\{1,2\})_* matches things like (11) or (1) followed by zero or more underscores.
*_([0-9]\{1,2\}) matches a * followed by _ followed by something like (11). What you mean is
.*_([0-9]\{1,2\})
Note the .; regular expressions aren't glob patterns.
|
Just find gives me:
.
./bla-bla_(11)
./bla-bla_(1)
./rename
./rename~
This
find . | grep "*_([0-9]\{1,2\})"
gives me an empty result, and this
find . | grep "([0-9]\{1,2\})_*"
gives me
./bla-bla_(11)
./bla-bla_(1)But as you can see underscore and other chars appears before braced digites. Why it works in second case? but not in first where i placed them in right order.
|
grep and find in pipeline. Strange reverse
|
Assigning from a yield is standard as of Python 2.5. It enables coroutines. See http://docs.python.org/whatsnew/2.5.html#pep-342-new-generator-features
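A tiny standalone illustration of the language feature (not specific to the Pipeline library): whatever you pass to a generator's send() becomes the result of the yield expression inside it.
def echoer():
    received = None
    while True:
        # the value sent in becomes the result of this yield expression
        received = yield "got: %s" % received

gen = echoer()
next(gen)                 # prime the generator up to the first yield
print(gen.send("hello"))  # prints: got: hello
print(gen.send("world"))  # prints: got: world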
|
From the following page (http://code.google.com/p/appengine-pipeline/wiki/GettingStarted) I have seen the following code in an example of how to use the AppEngine pipeline:
class AddOne(pipeline.Pipeline):
def run(self, number):
return number + 1
class AddTwoAndLog(pipeline.Pipeline):
def run(self, number):
result = yield AddOne(number)
final_result = yield AddOne(result)
yield LogMessage('The value is: %d', final_result) # Works
My question/confusion is about the yield statement on the right side of the "=". Is this standard Python syntax/usage, or is this a special case that is only allowed/used with the Pipeline model? What is happening here?
|
AppEngine Pipeline Yield - is this standard usage of the yield operator?
|
You can do
let applyFirst f elems = elems |> Seq.tryPick (f >> Some)
But I think I prefer
let applyFirst f elems =
if Seq.isEmpty elems then
None
else
Some( f(Seq.head elems) )as more readable.
|
If I specify a function value as:
let applyFirst f elements =
    if Seq.isEmpty elements then None else elements |> Seq.head |> f
then F# infers f's type as f: 'a -> 'b option. It's OK, I understand why F# infers f's return type as 'b option. But I want f to be f: 'a -> 'b, and it can be done by changing the applyFirst function:
let applyFirst f elements =
    if Seq.isEmpty elements then None else elements |> Seq.head |> f |> Some
But I wonder if there's some more elegant way to do this?
|
Collapse option type creation
|
You should create a custom application. It's not clear what you do with the stream coming from either camera; let's assume for now you're just displaying it. Create a bin with a source element for the camera, and a decodebin element for decoding. When you want to switch, pause the pipeline, remove the source and decodebin, add two new ones (with the new IP) and set them to paused. Then set the whole pipeline to playing. If the cameras are of the same type, you may get away with reusing the one source element (going to NULL or READY first), but it's more than likely you should throw away and recreate the decoder.
|
I have two ip addresses linked to two cameras. I can stream one ip address. I need to switch from one camera to the other so my source in the pipeline should change from one ip address to another. Is there a way to accomplish that using a gstreamer plugin? Or by command line? Is there an application that can do this? Should I create a custom application?
|
Realtime changing the ip source into a gstreamer pipeline
|
I can't find the comments link, so I'll post an answer.
As Eugen Constantin Dinca said, a pipe or redirect just sends output to standard input, so what your program needs to do is read from standard input. I don't know what "read line by line" means as you mentioned it - something like ftp interactive mode? If so, there should be a loop in your program which reads a line at a time and waits for the next input until you give the termination signal.
Edit:
int c;
while(-1 != (c = getchar()))
putchar(c);
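For comparison, the same idea in Python, using the standard fileinput module, which reads from files named on the command line or from stdin when none are given (just a sketch of one option):
import fileinput

# works both as `python prog.py file.txt` and as `other_cmd | python prog.py`
for line in fileinput.input():
    print(line, end="")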
|
I need to write a program that works with input from either a file or the shell (for pipeline processing). What is the most efficient way to deal with this? I essentially need to read the input line by line, but the input might be the output of another program from the shell, or a file. Thanks
|
pipeline input data
|
You can use Select-Object (or select for short):
Get-GPO -All | Select DisplayName
|
Let's say I run a PowerShell command (in my case Group Policy related), for example this command:
PS C:> Get-GPO -All
and my output looks like:
DisplayName : My Named GPO
DomainName : mydomain.com
Owner : Domain Admins
Id : Random_GUID
...
How can I "filter" that command so that it only returns the lines relating to DisplayName? Is that possible, or will I need to do some string parsing that's not available inside a pipeline operation? Because ultimately, I'm looking to use that DisplayName param to pipe to another command. Thanks in advance!
|
Powershell Newbie: How do I filter results so that only the information I get back can be used in a pipeline?
|
There is no way to force pipelining using HttpWebRequest. However, if the server is 1.1 compliant, and your request method is idempotent, you can get a high probability of pipelining being used if you use async and issue multiple requests to the same server at a time. You can also use the synchronous pattern with multiple threads. The key is to issue more than one request at a time.
|
I would like to send multiple HTTP requests to a server, using pipelining where possible, and otherwise using multiple TCP connections. However, HttpWebRequest seems to automatically use multiple connections if ServicePointManager.DefaultConnectionLimit is bigger than 1. I can only get it to pipeline if I set this to 1. Is there an alternative way to force pipelining?
|
Is there a way to force pipelining in HttpWebRequest without setting ServicePointManager.DefaultConnectionLimit?
|
The best method I've found is to use a Lua script to construct the response. The example below is not the fastest, since we can use EVALSHA to reduce our request size.
Code:
let raw_kv: Vec<Option<(String, Vec<u8>)>> = tokio::time::timeout(
timeout,
redis::cmd("EVAL")
.arg(
// use KEYS instead of SCAN since EVAL is already blocking
"
local keys = redis.call('KEYS', ARGV[1]);
if next(keys) == nil then return keys end;
local out = {};
for i=1,#keys do out[i] = {keys[i], redis.call('GET', keys[i])} end
return out;
",
)
.arg(0) // no manually passed in keys
.arg(pattern) // custom pattern to search for
.query_async(&mut self.conn),
)
.await;
Docs: https://docs.rs/redis/latest/redis/struct.Cmd.html#method.arg
|
I'm trying to get all the key value pairs for a given prefix from Redis, without making many network requests using Rust. The answers so far suggest using bash or making multiple requests.
|
How to query all key-value pairs for a pattern from Redis without making multiple requests
|
xcodebuild test -project Project.xcodeproj -scheme Scheme -destination 'platform=iOS Simulator,name=iPhone 14' | xcpretty -s
Using this command, the build completed successfully. I just removed the OS.
Ref: Running tests for Swift package using xcodebuild: Error tests must be run on a concrete device
The notes in that answer helped me.
|
I am trying to build in the terminal and in a GitLab pipeline, using:
xcodebuild test -project Project.xcodeproj -scheme Scheme -destination 'platform=iOS Simulator,name=iPhone 14,OS=13.0' | xcpretty -s
But I am facing issues like:
xcodebuild: error: Failed to build project Project with scheme Project.: Cannot test target “Tests” on “Any iOS Device”: Tests must be run on a concrete device
Cannot test target “UITests” on “Any iOS Device”: Tests must be run on a concrete device.
I am using these:
Minimum Deployment: 13.0
xcodebuild clean -project Project.xcodeproj -scheme Scheme | xcpretty
xcodebuild test -project Project.xcodeproj -scheme Scheme -destination 'platform=iOS Simulator,name=iPhone 14,OS=13.0' | xcpretty -s
|
Xcode Build: Tests must be run on a concrete device
|
I found the solution with help from the post: Updating deployment manifest for a ClickOnce application programmatically results in missing <compatibleFrameworks> element, required in 4.0
To summarize:
- Build with t:publish
- Remove all the *.deploy extensions
- Update the application manifest
- Add back all *.deploy extensions
- Update the deployment manifest
- Upload to the Azure container
Now the application can be installed with only a click on a link to the deployment manifest!
|
I have a C# WinForms project, which is located in an Azure repository. My goal is to transform it into a ClickOnce application in an Azure pipeline using a build server.
There is no Signing or Publish enabled in the *.csproj. Everything looks good, but when I try to install the application via a link to the deployment application, the following error occurs: "Reference in the deployment does not match the identity defined in the application manifest."
Both manifests have the same top public keys. The public key for the in the deployment manifest is set to zero. I'm not sure if this is the root cause and, if so, how to fix it.
The job in the Azure pipeline looks like this:
- Fetching the certificate
- Building via MSBuild@1 with option /target:publish
- Signing via mage.exe the deployment manifest (application file), the application manifest (exe.manifest file) and the executable application (exe.deploy file)
- Uploading the ClickOnce files to an Azure container
Any idea?
|
Possible to make a ClickOnce application in Azure pipeline from a c# winform project
|
You can implement your own custom dataset for a specific format. I am not familiar with LLM formats, but I don't think there is a universal format for binary? For your second question, you may use the APIDataset to fetch from some endpoint. There is a HuggingFaceDataset that you may take as inspiration.
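For the raw-bytes case, a rough sketch of what such a custom dataset could look like - treat the base-class name as an assumption to check against your Kedro version (recent releases spell it AbstractDataset, older ones AbstractDataSet), and BinaryFileDataset is just a made-up name:
from pathlib import Path
from kedro.io import AbstractDataset  # assumption: older Kedro imports AbstractDataSet instead

class BinaryFileDataset(AbstractDataset):
    """Reads and writes a file as raw bytes, so the stored format stays language agnostic."""

    def __init__(self, filepath: str):
        self._filepath = Path(filepath)

    def _load(self) -> bytes:
        return self._filepath.read_bytes()

    def _save(self, data: bytes) -> None:
        self._filepath.parent.mkdir(parents=True, exist_ok=True)
        self._filepath.write_bytes(data)

    def _describe(self) -> dict:
        return {"filepath": str(self._filepath)}
You would then point a catalog entry's type at this class (e.g. type: <your_package>.BinaryFileDataset) instead of pickle.PickleDataSet.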
|
I'm currently working on a data science project using LLMs (large language models). Weights for models usually come in different formats, most frequently .bin or .gguf, and I'd like to keep it that way. However, the only way I know to store binary files is to use type: pickle.PickleDataset, like so:
test_model: # simple example without compression
type: pickle.PickleDataSet
filepath: data/07_model_output/test_model.pkl
  backend: pickle
I'm not okay with that, as I want my model files to be language agnostic. What would be a correct way to specify an arbitrary binary file in catalog.yml? (Additional question: what if I want to fetch it from a certain URL or by running some kind of script which fetches it from the net? Should I create a separate pipeline for that?)
|
KEDRO - How to specify an arbitrary binary file in catalog.yml?
|
It works like OR, and it is impossible to configure this to behave like AND. For this you require state management, which is not provided by Azure DevOps. You could try to build it on your own with some external service, and then, when both A and B have been triggered in a way that satisfies your condition, trigger pipeline C via the REST API.
|
I have two different pipelines A and B and need to trigger pipeline C on completion of both pipeline A AND pipeline B. A and B run at different times, and I need to trigger C upon completion of both A and B, and not run it if either A or B fails. Would adding both pipelines, as in the example code below, create "OR" logic or "AND" logic for the pipeline?
resources:
pipelines:
- pipeline: A
source: A.source
trigger:
branches:
- master
- pipeline: B
source: B.Source
trigger:
branches:
- master
I only have experience adding a single pipeline dependency, but requirements have changed and I wanted to check before posting the above logic.
|
Trigger pipeline on completion of all dependent pipelines yaml
|
Access env variables from Azure Static Web App · Discussion: Variables.
VITE ENV variables in the production of an Azure static web app. Used this reference for Building for Production:
const value = import.meta.env.VITE_VALUETOGET
console.log('process.env', process.env);
Refer to Env Variables and Modes:
{
  "NODE_ENV": "production",
  "PUBLIC_URL": "",
  "FAST_REFRESH": true
}
Use Environment Variables in Vite. The startup command contains npx serve -l 8080.
For other details refer to How to load environment variables from a .env file using Vite.
|
Locally I use a .env file and do the following:
const value = import.meta.env.VITE_VALUETOGET
console.log("value", value )
When I go to production, however, I use the Azure portal configuration app settings and added VITE_VALUETOGET with the value. This doesn't work, because I get undefined if I try to log it. I also tried to use a pipeline to deploy the app and export the values from Azure Vault before I deploy it, but nothing seems to work; process.env doesn't either. I also tried vite.config.js:
export default defineConfig({
mode: 'production',
plugins: [
react(),
// Add the replace plugin to inject environment variables
replace({
'import.meta.env.VITE_VALUETOGET': JSON.stringify(process.env.VITE_VALUETOGET)
}),
],
});
|
How can i access my react VITE ENV variables in production of an azure static web app?
|
"I don't understand why Bind(SafeSplit) returns an EitherData which has a Right property I need to dereference."
You don't have to (in fact, I'd advise against it), but it compiles because the LanguageExt library comes with a Bind overload with this signature:
public static IEnumerable<R> Bind<T, R>(
this IEnumerable<T> self, Func<T, IEnumerable<R>> binder)
{
return self.BindFast(binder);
and since Either<L, R> implements IEnumerable<EitherData<L, R>>, the code in the OP compiles. The reason it's not advisable is because the Right property is partial or unsafe. It works as long as the Either object is a Right value, but the whole point of the type is that you rarely know whether or not that's the case. Here's an alternative expression that may not look much simpler, but at least is safer:
var result = Right<Exception, string>("1,2,Foo,4")
.Bind(SafeSplit)
.Map(numStrs => numStrs
.Map(SafeParse)
.Rights()
.Map(num => num * num) // Square
.Iter(WriteLine));
Instead of trying to get the value out of the monad, it keeps the outer Either container and performs all actions inside of it.
|
I'm trying to grok how to deal with functions that turn one "Either" into many "Either"s and how you then merge those back into a single stream. The following turns one string into many numbers, squaring each and ignoring any errors. Given "1,2,Foo,4" it writes out 1, 4, 16. It works, but I don't understand why Bind(SafeSplit) returns an EitherData which has a Right property I need to dereference.
private static void ValidationPipelineVersion()
{
var result = Right<Exception, string>("1,2,Foo,4")
.Bind(SafeSplit)
.Bind(numStrs => numStrs.Right.Select(SafeParse))
.Rights() // ignore exceptions for now
.Map(num => num * num) // Squared
.Iter(num => WriteLine(num));
}
private static Either<Exception,string[]> SafeSplit(string str)
{
try
{
return str.Split(",");
}
catch (Exception e)
{
return e;
}
}
private static Either<Exception,int> SafeParse(string str)
{
try
{
return int.Parse(str);
}
catch (Exception e)
{
return e;
}
}
|
Is there a simpler way to do this pipeline with languageext?
|
I would propose the following solution using static partitions, if all values should be materialized.
@asset
def get_a():
return a
@asset
def get_b():
return b
@asset(
partitions_def=StaticPartitionsDefinition(["a", "s", "e"])
)
def multiple_num(context: AssetExecutionContext, get_a, get_b):
partition_str = context.asset_partition_key_for_output()
return c
@op
def get_values():
yield RunRequest(
asset_selection=[AssetKey(multiple_num)]
)
# clean-up
@job
def run_values():
get_values()
|
I am new to Dagster and having difficulty running the asset for different parameter values when the job is scheduled. I have created a pipeline using Dagster. I am trying to materialize the outcome of the upstream asset multiple_num() and using an op to pass the parameter value to the asset. A simplified example is:
@asset
def get_a():
return a
@asset
def get_b():
return b
@asset
def multiple_num(get_a, get_b):
return c
@op
def get_values():
values = ['a','s','e']
for value in values:
yield RunRequest(
run_key=None,
asset_selection = [AssetKey(multiple_num)]
)
cleanup_directory(value)
def cleanup_directory() -> str:
return status
@job
def run_values():
    get_values()
How do I call the asset for different parameter values when the job is scheduled?
|
How to pass parameter to asset from op in dagster
|
You can turn off a pipeline step by setting the estimator to None:
pipe.set_params(step_name=None)
If you really need to delete the step altogether, I don't think there's an sklearn-specific way to do that. You can just find the index of the step and then pop or del it, e.g.
idx = [
idx
for idx, name in enumerate(pipe.named_steps.keys())
if name == 'step_name'
][0]
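Putting it together, a small self-contained sketch (the 'scaler'/'clf' step names are just for illustration; note that newer scikit-learn versions prefer 'passthrough' over None for disabling a step):
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

pipe = Pipeline([('scaler', StandardScaler()), ('clf', LogisticRegression())])

# Option 1: disable the step in place (it stays in the list but does nothing)
pipe.set_params(scaler='passthrough')

# Option 2: actually remove it by name
idx = [i for i, (name, _) in enumerate(pipe.steps) if name == 'scaler'][0]
pipe.steps.pop(idx)
print(pipe.steps)  # only the 'clf' step remains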
|
How to remove a step from a sklearn pipeline using the step name? By position I know that it can be done:
pipeline.steps.pop(n)
But with a very large pipeline, it can be difficult to find the position of the step you want to remove.
|
Drop a step from a sklearn pipeline using the step name
|
You are trying to directly access the 'vect' variable, but it is not defined outside of the pipeline; use the pipeline object lr to perform the transformation.
news = ["A phase two clinical trial found the shot combined with immunotherapy drug Merck slashed the risk of melanoma returning by 44 percent compared to using the drug alone. Preliminary findings were published in December but had not been reviewed and confirmed by other scientists."]
predicted = lr.predict(news)
for doc, category in zip(news, predicted):
print(category)
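If you do want to reach the individual fitted steps (for instance to reuse the vectorizer explicitly), they are available through the pipeline's named_steps; a brief sketch of that alternative, reusing the lr pipeline and news list from above:
vect = lr.named_steps['vect']        # the fitted CountVectorizer inside the pipeline
tfidf = lr.named_steps['tfidf']
x_new_counts = vect.transform(news)
x_new_tf = tfidf.transform(x_new_counts)
predicted = lr.named_steps['clf'].predict(x_new_tf)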
|
So I have this pipeline I used for a text classifier that works fine.
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
lr = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', LogisticRegression(max_iter = 1000)),
])
lr.fit(X_train,y_train)
y_pred1 = lr.predict(X_test)
The thing is, when I try to use the variable name 'vect' in predicting the text, I am told 'vect' is not defined.
news = ["A phase two clinical trial found the shot combined with immunotherapy drug Merck slashed the risk of melanoma returning by 44 percent compared to using the drug alone. Preliminary findings were published in December but had not been reviewed and confirmed by other scientists."]
x_new_counts = vect.transform(news)
x_new_tf = tfidf.transform(x_new_counts)
predicted = clf.predict(x_new_tf)
for doc, category in zip(news, predicted):
print(category)
#error message: name 'vect' is not defined
How is that possible when vect is defined in the pipeline?
|
'Vect' not defined sklearn logistic regression error message
|
ForEach-Object can help you here. The reason why combining both breaks is that neither of those cmdlets (Add-Content and Copy-Item) produces output, so there is nothing to pipe to.
$SourcePath = 'D:\TEST'
$DestPath = 'C:\TEST'
$LogDetailFile = 'C:\Temp\CopyDetail.log'
$Exclude = '!_Archive_!'
Get-ChildItem $SourcePath -Recurse | ForEach-Object {
if($_.FullName -notmatch $Exclude) {
$_ | Copy-Item -Destination { Join-Path $DestPath $_.FullName.Substring($SourcePath.Length) }
$_ | Select-Object FullName # Probably need `-ExpandProperty` here
}
} | Add-Content $LogDetailFile
|
Basically, my goal is to copy the content of a folder to another folder, excluding one name, and also log everything that has been copied. I'm stuck on logging the Get-ChildItem command combined with the pipeline --> Copy-Item. The command below will put all data from the Get-ChildItem command into the log file:
$SourcePath = "D:\TEST"
$DestPath = "C:\TEST"
$LogDetailFile = "C:\Temp\CopyDetail.log"
$Exclude = "!_Archive_!"
Get-ChildItem $SourcePath -Recurse | Where {$_.FullName -notmatch $Exclude} |
Select FullName | Add-Content $LogDetailFile
When I additionally add the next pipeline command to copy them to $DestPath, it won't work:
Get-ChildItem $SourcePath -Recurse | Where {$_.FullName -notmatch $Exclude} |
Copy-Item -Destination {Join-Path $DestPath $_.FullName.Substring($SourcePath.length)} |
Add-Content $LogFile
When I do it without the logging options, everything works fine and all the data is copied:
Get-ChildItem $SourcePath -Recurse | Where {$_.FullName -notmatch $Exclude} |
Copy-Item -Destination {Join-Path $DestPath $_.FullName.Substring($SourcePath.length)}
I already tried switching the pipelines around, but it doesn't work. What am I missing here? How do I copy everything from one folder to another and log all copied items to a log file? Right now, if I want both things - logging and copying those items - I need to run 2 commands; I just want to have it in one command.
|
Get-ChildItem with Logging and Copy-Item in one command
|
It refers to the Kubernetes Horizontal Pod Autoscaler: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
Example: https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-scale?tabs=azure-cli
|
What does the maxReplicas property mean in the pipeline YAML in Azure in the context of the k8s deployment? E.g., in this documentation maxReplicas: 10 is mentioned, but there is no explanation of what it means. At least I was not able to find one. I would be grateful if someone could help me find the documentation on that. I have two assumptions. First, it means that we need to duplicate pods, i.e. with maxReplicas: 10 we may have up to 10 clusters with identical pods. Second assumption: maxReplicas: 10 means that in a k8s cluster we can have no more than 10 pods.
|
What does the `maxReplicas` property mean in the pipeline yaml in Azure in context of the k8s deployment?
|
What you are missing is the remainder parameter of ColumnTransformer. By default, ColumnTransformer drops all unprocessed data: "By default, only the specified columns in transformers are transformed and combined in the output, and the non-specified columns are dropped." (default of 'drop'). So when you don't specify remainder, you have only the OHE of the City column:
# Name, Age, City --> ColumnTransformer --> OHE(City)
ct = ColumnTransformer(transformers = [('One-hot', OHE, ['City'])])
ct.fit_transform(data)
# Output
array([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
Using remainder='passthrough' gives you:
# Name, Age, City --> ColumnTransformer --> OHE(City), Name, Age
ct = ColumnTransformer(transformers = [('One-hot', OHE, ['City'])],
remainder='passthrough')
ct.fit_transform(data)
# Output
array([[1.0, 0.0, 0.0, 'Alice', 30],
[0.0, 1.0, 0.0, 'Bob', 40],
[0.0, 0.0, 1.0, 'Charlie', 37]], dtype=object)
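To make the hand-off concrete, here is a small sketch (reusing the data from the question) showing that what the downstream estimator's fit() receives is simply the output of the fitted preprocessor step:

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

data = pd.DataFrame({'Name': ['Alice', 'Bob', 'Charlie'],
                     'Age': [30, 40, 37],
                     'City': ['Amsterdam', 'Berlin', 'Copenhagen']})

ct = ColumnTransformer(transformers=[('One-hot', OneHotEncoder(), ['City'])],
                       remainder='passthrough')

# Pipeline.fit(X, y) first runs ct.fit_transform(X) and then hands the
# result (together with the unchanged y) to <some_estimator>.fit(...)
Xt = ct.fit_transform(data)
print(Xt)
# [[1.0 0.0 0.0 'Alice' 30]
#  [0.0 1.0 0.0 'Bob' 40]
#  [0.0 0.0 1.0 'Charlie' 37]]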
|
I am trying to get an idea of the inner workings of a scikit-learn Pipeline. Consider the data set and pipeline construction below.

data = pd.DataFrame({
'Name': ['Alice', 'Bob', 'Charlie'],
'Age' : [30, 40, 37],
'City': ['Amsterdam', 'Berlin', 'Copenhagen']
})
OHE = OneHotEncoder()
ct = ColumnTransformer(transformers = [('One-hot', OHE, ['City'])])
ppln = Pipeline(steps=[('preprocessor', ct),
('estimator', <some_estimator>())
                       ])

The next step is then a fit of sorts:

model = ppln.fit(data)

To the best of my understanding, the above is a series of steps that ends in <some_estimator>.fit(???). My question is: what will the ??? actually be (or how can I determine what it will be)? I am unsure about this because I don't know exactly how the 'preprocessor' interacts with the data. On its own, ct.transform(data) returns a matrix-like object. In this case that would be

[[1,0,0],
[0,1,0],
 [0,0,1]]

I am guessing that something will happen that eventually makes it so that:

??? =
[['Alice' , 30, 1, 0, 0],
['Bob' , 40, 0, 1, 0],
 ['Charlie', 37, 0, 0, 1]]

I would like to do better than guessing, know for sure what happens, and gain a more complete view of what is happening under the hood when using a Pipeline.
|
What object is a sklearn.pipeline.Pipeline that applies a ColumnTransformer actually fitting on when fit(X, Y) is called on it
|
You need to create a project access token; if you don't have permission, ask the admin to create one.

To create a project access token:

On the top bar, select Main menu > Projects and find your project.
Go to Settings > Access Tokens.
Enter a name.
Select a role for the token.
Select Create project access token.

Edit the .gitmodules file in your project and enter the newly created username and token as below:

url = https://$(USERNAME):$(PROJECT_ACCESS_TOKEN)@github.com/user/repository.git

Set GIT_SUBMODULE_STRATEGY: normal in your CI/CD.
|
I am working on a CI/CD pipeline (GitLab) build and I want to update my submodule. (Locally, git submodule update --init --recursive --remote works well.) But the same thing in the pipeline gives the error below:

Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.

I use the git submodule update --init --recursive --remote command in the scripts section. It works well in the terminal, but not in the pipeline.
|
Host key verification failed error for submodule in gitlab pipeline
|
Use a script section instead of withGroovy. Actually, saying that you want to use Groovy is a bit odd, as you are already in the Groovy world - just use "script". You are now in the declarative area, but you could consider a scripted pipeline if you want more freedom by writing code rather than declarations: https://www.jenkins.io/doc/book/pipeline/syntax/#scripted-pipeline
|
I want to write a Jenkins pipeline that uses "System Groovy Script". I have validated that the "Pipeline: Groovy" plugin is installed. When I try something simple like using 'println()', it succeeds:

pipeline
{
agent { label 'remote' }
stages
{
stage('sandbox')
{
steps
{
withGroovy
{
println("Hello");
println("World");
}
}
}
}
}

However, when I try to do something more, like defining a variable, the interpreter does not seem to recognize what I am doing. For example, I add the following line to my code:

def content = "Hello World";

As follows:

pipeline
{
agent { label 'remote' }
stages
{
stage('sandbox')
{
steps
{
withGroovy
{
println("Hello");
def content = "Hello World";
println("World");
}
}
}
}
}

I receive this error:

org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 14: Expected a step @ line 14, column 21.
def content = "Hello World";
^
1 errorAm I missing something?
Am I allowed to use standard Groovy code inside a 'withGroovy' step?
|
Jenkins Pipeline: How may I define a variable inside a 'withGroovy' step?
|
The TriggerName and TriggerStartTime columns from the ADFTriggerRun table, together with the PipelineName and PipelineStartTime columns from the ADFPipelineRun table, can be used to combine ADFPipelineRun with ADFTriggerRun. These columns can be used to make a connection between the two tables because they are shared by both of them.

Query to be implemented:

ADFTriggerRun
| join kind=inner
(
ADFPipelineRun
| project PipelineName, PipelineStartTime, RunId
) on TriggerName == PipelineName and TriggerStartTime == PipelineStartTime
| project TriggerName, TriggerStartTime, PipelineName, PipelineStartTime, RunId

In order to get RunId, we need to create an ADF instance. After creating the ADF, go to Monitor and then to the pipeline runs.
|
In KQL I can connect ADFPipelineRun with ADFActivityRun by using CorrelationId (join). But now I want to join ADFPipelineRun with ADFTriggerRun, and it doesn't work with the CorrelationId. I tried it with the RunId, but the RunId is not available in my Trigger table, so I don't know how to connect them.

The TriggerRun table contains values like TriggerId and CorrelationId, but not RunId. My ADF tables for TriggerRun and PipelineRun both have a RunId, but it is not the same, so I don't know which connection I need to make between those two. And if it is possible between those three tables, I could make assumptions on where and how it went wrong in my Log Analytics. Thanks! A good answer please :)
|
KQL Pipeline connection with Activity and Trigger
|
I added one user to the Readers group to access the Azure DevOps project. The user was able to view the boards and the work items, but was unable to edit them due to the Reader role.

Then, with my admin account, I went to Project Settings > Boards > Team configuration > Areas > Security > View work items in this node > Deny. When I switch back to the user who is in the Readers group, the work items in Boards are no longer visible.

To deny access to Pipelines, use the following setting: go to Pipelines > three dots > Manage security > Deny > View build pipeline and View builds. The pipeline folder is then not visible to the user.

References:
permissions - Azure DevOps deny access to Pipelines - Stack Overflow, by Shamrai Aleksander
How to restrict work item visibility/access to the team only in Azure DevOps | by Zaniar | Medium, by zaniar
|
Is it possible to hide:

board names in dropdowns in the Boards menu
folder names in the Pipelines menu
the Project settings page
from users of a given user group in Azure DevOps?

I can hide the pipelines and work items, but not the pipeline folders and the board names in the dropdowns.
|
Hide folder names, board names and the project settings page in Azure DevOps from users in a group
|
Make sure to add the application settings that allow zip deploy, like below:

- task: AzureCLI@2
inputs:
# TODO: Fill in with the name of your Azure service connection
azureSubscription: ''
scriptType: 'bash'
scriptLocation: 'inlineScript'
inlineScript: |
az functionapp config appsettings set --name $(LAname) --resource-group $(resourceGroupName) --settings "BLOB_CONNECTION_RUNTIMEURL=$(blobendpointurl)"
az functionapp config appsettings set --name $(LAname) --resource-group $(resourceGroupName) --settings "WORKFLOWS_RESOURCE_GROUP_NAME=$(resourceGroupName)"
az functionapp config appsettings set --name $(LAname) --resource-group $(resourceGroupName) --settings WEBSITE_RUN_FROM_PACKAGE=1
addSpnToEnvironment: true
    useGlobalConfig: true

PowerShell task with an App Service plan:

- task: AzurePowerShell@5
inputs:
azureSubscription: 'MyAzureSubscription'
ScriptType: 'InlineScript'
Inline: |
Set-AzWebApp -Name MyWebApp -ResourceGroupName MyResourceGroup -AppSettings @{'WEBSITE_RUN_FROM_PACKAGE'='1'}
    azurePowerShellVersion: 'LatestVersion'

Make sure you change the deployment method in your YAML pipeline to deploymentMethod: 'runFromPackage' instead of 'zipDeploy', and then run your pipeline to allow zip deployment of the Azure Logic App.

Reference: AzureFunctionApp@1 Gives a warning about something it removes itself · Issue #17580 · microsoft/azure-pipelines-tasks · GitHub
|
In my Azure DevOps pipeline, I'm deploying a logic app on Azure, but I get this warning:

##[warning]"ZipDeploy Validation WARNING: It is recommended to set app setting WEBSITE_RUN_FROM_PACKAGE = 1 unless you are targeting one of the following scenarios:
1. Using portal editing.
2. Running post deployment scripts.
3. Need write permission in wwwroot.
4. Using custom handler with special requirements.
NOTE: If you decide to update app setting WEBSITE_RUN_FROM_PACKAGE = 1, you will have to re-deploy your code."

Is it possible to suppress this warning?
|
How to suppress warning "ZipDeploy Validation WARNING: It is recommended to set app setting WEBSITE_RUN_FROM_PACKAGE = 1"
|
You can use ansible-vault to encrypt your inventory.yml file using a password or a password file.

Create a file that contains your Vault password, e.g. vault_pass.txt.

Encrypt your inventory.yml file using the ansible-vault command with the --vault-password-file option:

ansible-vault encrypt inventory.yml --vault-password-file=/path/to/vault_pass.txt

Push your encrypted inventory.yml file into your GitLab project.

To run a playbook that uses the encrypted file, just add the following:

ansible-playbook ansible_roles.yml -i inventory.yml --vault-password-file=/path/to/vaultkeyfile

Or you can do the same with --ask-vault-pass, which asks you for the password when executing the playbook:

ansible-playbook ansible_roles.yml -i inventory.yml --ask-vault-pass

And finally, if you want to decrypt it:

ansible-vault decrypt inventory.yml --vault-password-file=/path/to/vault_pass.txt
|
I would like to practice automation with ansible and a ci-cd pipeline and my only problem is that I'm not sure how to reference the password for the user. If possible I would like to avoid using passwords in my inventory.yml since it would be visible in my Gitlab project.
I'm testing out the CI/CD environment variables so I can reference them more easily.

My .gitlab-ci.yml:

stages:
- deploy
deploy-job:
stage: deploy
script:
- apk add ansible -v
- apk add sshpass -v
- ls -lah
- mkdir /etc/ansible/
- touch /etc/ansible/ansible.cfg
- touch ~/.ansible.cfg
- echo "[defaults]" >> /etc/ansible/ansible.cfg
- echo "host_key_checking = False" >> /etc/ansible/ansible.cfg
- echo "[defaults]" >> ~/.ansible.cfg
- echo "host_key_checking = False" >> ~/.ansible.cfg
    - ansible-playbook ansible_roles.yml -i inventory.yml --extra-vars=$CONTABO_PASSWORD

My inventory.yml file:

all:
children:
webservers:
hosts:
Contabo:
ansible_ssh_port: xxx
ansible_host: xxx.xxx.xxx.xxx
ansible_password: $CONTABO_PASSWORD
vars:
became: yes
become_method: sudo
      ansible_user: test

What should I change in the code?
|
How should I pass an ssh password using a gitlab ci-cd variable for my ansible-playbook?
|
It's a known issue / limitation that is being worked on at the moment:

Show a setup screen when project has no commits
exp init/exp run: fails if the repo has no root commit
exp: improve ui for empty Git repos

The simplest workaround is to create at least a single commit in the repository before running it.
|
I tried initializing DVC and am getting this error. I am not using Git actively to track anything, although Git is initialised. I am trying to create some plots:

from dvclive import Live
I am not using GIT actively to track anything although git is initialised . I am trying to create some plotsfrom dvclive import Live
live = Live("evaluation2/metrics")does anyone have any idea on this . I am new to DVC so might be very silly issue.
|
DVC error : scmrepo.exceptions.SCMError: Empty git repo
|
The correct syntax to check against a regular expression pattern is =~ instead of =-.
|
Committing my code to its GitLab repository triggers the pipeline (as it should). However, I have an instruction telling it not to run the verify job unless the commit message contains the phrase 'trigger ci'. Yet it runs the job even when the phrase isn't in the commit message. Where am I going wrong?

include:
- '/gitlab-ci/includes.yml'
- '/idt-test-stub/gitlab-ci.yml'
variables:
ALPINE_VERSION: "3.16"
NODE_VERSION: "14-alpine"
ESLINT_CONFIG_PATH: "./.eslintrc.js"
SOURCE_DIR: "."
RULESET: MAVEN_CI
BUILD_TYPE: MAVEN
MVN_CLI_OPTS: "--batch-mode"
MVN_OPTS: "-Dmaven.repo.local=.m2-local -f wiremock-java/pom.xml"
MAVEN_IMAGE: "maven:3-jdk-11"
stages:
- test
- code-quality
- code-test
- code-analysis
- verify
- transform
- application-build
- image-build
- move-container-tests
- container-image-test
- image-push
.branches: &branches
only:
- branches
todo-check:
<<: *branches
shell-check:
<<: *branches
docker-lint:
<<: *branches
unit-test:
<<: *branches
artifacts:
expire_in: 20 mins
paths:
- ./coverage
verify:
only:
variables:
- $CI_COMMIT_MESSAGE =- /trigger ci/
<<: *branches
stage: verify
image: node:$NODE_VERSION
script:
- npm install
- ./run-verify.sh
tags:
- docker-in-docker-privileged
|
Why is my ignore command being ignored in my Gitlab pipeline yaml file?
|
It may be better to use ..1, ..2 as arguments. In the pmap code, the . will take the full dataset columns including the msg2 column just created (thus it is 3 columns instead of 2), whereas the lambda created had only two arguments.

library(dplyr)
library(purrr)
Z |>
mutate(msg2 = map2_chr(x, fn, \(x,fn)
paste(toString(x), '->', fn(x))),
msgp = pmap_chr(across(everything()),
~ paste(toString(..1), "->", match.fun(..2)(..1))))-output# A tibble: 2 × 4
x fn msg2 msgp
<list> <list> <chr> <chr>
1 <int [2]> <fn> 9, 3 -> 9 9, 3 -> 9
2 <int [3]> <fn>   6, 2, 9 -> 2 6, 2, 9 -> 2

Or, making use of the OP's code with a slight modification to include only the x and fn columns:

Z |>
mutate(msg2 = map2_chr(x, fn, \(x,fn)
paste(toString(x), '->', fn(x))), # for 2 args this is the way
msgp = pmap_chr(across(c(x, fn)),
\(x, fn) paste(toString(x), '->', fn(x))) # but this is not an ERROR now
)-output# A tibble: 2 × 4
x fn msg2 msgp
<list> <list> <chr> <chr>
1 <int [2]> <fn> 9, 3 -> 9 9, 3 -> 9
2 <int [3]> <fn> 6, 2, 9 -> 2 6, 2, 9 -> 2
|
One use of pmap is to deal with situations where one might want map3, but such a function does not exist in purrr. For example:

library(tidyverse)
Z <- tibble(x = list(sample(10, size = 2), sample(10, size = 3)),
fn = c(max, min))
Z %>%
mutate(msg2 = map2_chr(x, fn, \(x,fn) paste(toString(x), '->', fn(x))), # for 2 args this is the way
msgp = pmap_chr(., \(x, fn) paste(toString(x), '->', fn(x))) # but can also do this
  )

(where my example maps a function of two parameters, so I can actually use map2; but think of the analogous problem with three parameters).

I would like to update my code to use the new native R pipe |>, but the following does not work:

Z |>
mutate(msg2 = map2_chr(x, fn, \(x,fn) paste(toString(x), '->', fn(x))), # for 2 args this is the way
msgp = pmap_chr(_, \(x, fn) paste(toString(x), '->', fn(x))) # but this is an ERROR
)What are some options for usingpmapwithmutateand with R's native pipe operator? Or, is this one situation where it makes sense to stick to usingmagritte's pipe (%>%)?
|
Using purrr::pmap with R's native pipe operator
|
I resolved the issue I posted by regenerating an RSA key and then, instead of:

tr -d '\n' < ~/.ssh/pipeline_rsa

using:

pbcopy < ~/.ssh/pipelines/id_rsa

In summary, the format of the SSH key was incorrect. It throws a different error now, but I guess that would form part of a different question.
|
I created a new SSH key on my local machine, added the public key to my account settings SSH keys, and the private key to the CI/CD settings of the project.

My .gitlab-ci.yml looks like the following:

build app:
stage: build
only:
- feature/ci-cd-pipeline-v1
before_script:
- 'command -v ssh-agent >/dev/null || ( apt-get update -y && apt-get install openssh-client -y )'
- eval $(ssh-agent -s)
- echo "$SSH_PRIVATE_KEY" | ssh-add -
- mkdir -p ~/.ssh
- chmod 700 ~/.ssh
- ssh-keyscan $GIT_URL >> ~/.ssh/known_host
- git config user.email "[email protected]"
- git config user.name "CI"
- git remote add acquia $GIT_URL
script:
- echo "Script will runb"
- git checkout -b feature/ci-cd-pipeline-v1
    - git push acquia feature/ci-cd-pipeline-v1

The goal of this is to push the updated code to my Acquia repository (which also has the SSH public key), but I get the following error when the pipeline runs:

$ command -v ssh-agent >/dev/null || ( apt-get update -y && apt-get install openssh-client -y )
$ eval $(ssh-agent -s)
Agent pid 12
$ echo "$SSH_PRIVATE_KEY" | ssh-add -
Error loading key "(stdin)": invalid format
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
|
GiLab pipeline failing to push to another repository
|
There's a known issue when using Python 3.6 on ubuntu-latest (recently upgraded to the default version 22.04). There are no tarballs for 22.04 for any Python 3.6.* version; check this link for the pre-installed software on 22.04. I am facing the same error when using Python 3.6.x on ubuntu-latest.

You could pin the image to ubuntu-20.04 to use Python 3.6.x, or use Python 3.10.x on the ubuntu-latest image.
|
- task: UsePythonVersion@0
inputs:
versionSpec: '3.x'
addToPath: true
    architecture: 'x64'

I get the following error on ubuntu-18.04:

##[error]Version spec 3.x for architecture x64 did not match any version in Agent.ToolsDirectory.
|
Use Python version v0 task fails in azure pipeline
|
On the Azure Pipelines page, you'll see a tab called Variables. Click on this tab, select Variable groups and then Link variable group. This way you can select which variable group to use.

To use a variable from a variable group, you need to reference a key from the variable group. For example, a variable group Development with the key/values:

- name: URL
value: http://localhost
- name: ENVIRONMENT
value: DEVIn your pipeline:echo $(URL)
echo $(ENVIRONMENT)

Using the Python task:

import os
URL = os.environ.get('URL')
print(URL)

I recommend you migrate this pipeline from Classic mode to YAML mode; the pipeline will be more modular and reusable.

Microsoft doc: https://learn.microsoft.com/en-us/azure/devops/pipelines/library/variable-groups?view=azure-devops-2022&tabs=yaml
|
I tried to find this online, but none of the options I tried work for me. I have an Azure pipeline that runs a Python script, and inside it I need to fetch a group variable value; this variable contains the backend URL. I tried multiple ways, but it still isn't working. Could anyone shed some light here?

This is the way I am trying to add the value, with $(ENVIRONMENT), but I also tried $(env:ENVIRONMENT), $(Environment), $(env:Environment).

Thank you in advance.
|
Reference group variable in Azure pipeline
|
The math functions in Data Factory are add, mul, div, etc. You cannot use the *, +, / literals. Here is the converted expression:

@concat(
'RANGE:',
string(add(1,mul(sub(1,1),div(int(variables('totalcount')),20)))),
':',
variables('totalcount'),
':50'
)

which gives the result "RANGE:1:18000:50". You did not specify the expected result, so I don't know if that is what you want. Note that the expression as written will always yield 1 for the first term because of the 1-1 part.
|
I am trying to pass a sum or multiplication of numbers, to subtract from a date, dynamically in an Azure pipeline, as below:

@concat(
'RANGE:',
1+((1-1)*(variables('totalcount')/20)),
':',
variables('totalcount'),
':50'
)

The above expression gives the error: Unrecognized expression: 1+((1-1)*(variables('totalcount')/20))
|
Pass number function in azure pipeline dinamically
|
I figured out the problem. Apparently, GitLab has a naming convention for variables. I had added a user-defined variable in the gitlab-ci config called SAMPLE-VARIABLE; it turns out you cannot use - in the name of a variable. Renaming it to SAMPLE_VARIABLE did the trick.

However, it would be good for GitLab to give clear error messages rather than throwing such a generic message at the end user and expecting them to figure it out. I spent almost a whole day trying different things to figure out what exactly was going wrong here.
|
I've been stuck trying to run a GitLab pipeline from a branch in my repo and it's driving me crazy. I have a sample branch called test-branch, but every time I try to run the pipeline manually, GitLab shows an error message saying:

Pipeline cannot be run
Failed to build pipeline!

I am the owner of the group, hence by inherited permissions I should be able to run a pipeline on any branch by default. I can manually run this same pipeline from the main branch, but for any other branch it simply won't budge. Here is the .gitlab-ci.yaml file I am using:
- build_artifacts
workflow:
rules:
- if: $CI_PIPELINE_SOURCE == "push"
when: never
- if: $CI_PIPELINE_SOURCE == "web"
Run-Pipeline:
stage: build_artifacts
script:
- |
echo "Somevalue" > file.json # Just an example script
artifacts:
paths:
      - file.json

Here are some steps I have taken to try to resolve this:

Changed the actual pipeline steps.
Deleted and recreated the branch with different code.
Scoured the pipeline troubleshooting docs, but I don't see this anywhere.

This has to be the worst error message GitLab shows; there is no way for me to debug what is going wrong. What am I missing here?
|
Running GitLab pipeline from a branch other than main
|
It is possible to push from a Jenkins pipeline, using the Credentials Binding plugin:

stage('git push') {
steps {
withCredentials([
gitUsernamePassword(credentialsId: 'mygitid', gitToolName: 'Default')
]) {
sh '''
# modify some files
git add .
git commit -m "register work"
git push
'''
}
}
}This assumes you remain on the default cloned branch (usually 'main')
|
Hi, I want to push from a Jenkins pipeline script. I registered my Git ID/password in Jenkins credentials, and I succeeded with git clone.
This is the git clone script:

git branch: "develop", credentialsId: "mygitid", url: "mygiturl"

Now I want to commit & push, but I don't know how to do this.
Anyone have idea?
|
How to git push in jenkins pipeline script?
|
I found https://about.gitlab.com/handbook/customer-success/professional-services-engineering/education-services/gitlabcicdhandsonlab6.html, where there is an example:

deploy review:
stage: review
# only:
# - branches
# except:
# - master
script:
- echo "Do your average deploy here"
rules:
- if: '$CI_COMMIT_REF_NAME == "master"'
when: never
- if: '$CI_COMMIT_TAG'
when: never
- when: always
environment:
name: review/$CI_COMMIT_REF_NAME
deploy release:
stage: deploy
# only:
# - tags
# except:
# - master
script:
- echo "Deploy to a production environment"
rules:
- if: '$CI_COMMIT_REF_NAME != "master" && $CI_COMMIT_TAG'
when: manual
environment:
name: production
deploy staging:
stage: deploy
# only:
# - master
script:
- echo "Deploy to a staging environment"
rules:
- if: '$CI_COMMIT_REF_NAME == "master"'
when: always
- when: never
environment:
      name: staging

This is what was expected.
|
I have a task:

Scaffold out a job policy pattern that uses feature branches and tags
to gate review, release and staging/production job execution

but I don't really understand this question - what should be done here?

Edit:
There was an answer from @live but now it's removed for some reason.
Anyway, he wrote:

Use GitLab's feature branching and tagging features to manage the
different versions of your code. Whenever you start working on a new
feature, create a new feature branch in GitLab and push your code
changes to that branch. When the feature is complete and ready to be
merged into the main branch, create a new tag in GitLab to mark the
point in the code where the feature was added.

Use the .gitlab-ci.yml file to define rules for when each job should
be run. For example, you might specify that the build job should only
be run when code is pushed to a feature branch, and the deploy job
should only be run when a new tag is created.

Does it mean just creating a feature branch and then, in the gitlab-ci.yml file, adding e.g.

only:
  - master

i.e. to run some stage only for e.g. master or another specified branch?
|
Scaffold out a job policy pattern that uses feature branches and tags in Gitlab
|
Something like this might help:

.sleeping_job:
needs: deploy-job1
when: on_failure
# do stuff
|
I have been wondering about this and could not find a solution to our problem. We have many CI pipelines running at scheduled times, and we need to add a job in case of an early job failure. For example, let's say that in the attached picture the job "deploy-job1" failed (not like in the picture).
We want to have a "sleeping job" that will be activated and run only when a previous job did not succeed.Gitlab pipelineAre there any suggestions on a way to handle this kind of a task?We have tried handle this within the scripts we are running but we want to have general "sleeping job" that will be similar to all stages
|
Active Gitlab CI job only when early job was failed
|
I had the same issue this morning. The root cause was an updated NuGetCommand task with an older version of NuGet itself. The solution is to manually install NuGet using the NuGet tool installer task:

- task: NuGetToolInstaller@1
inputs:
    versionSpec: 5.x

A detailed description can be found here.
|
If this is not the correct place for this question, could you please point me elsewhere? (The MS forums are not typically helpful.) I am trying out Azure DevOps 2022 RC [on-prem] to see what problems may occur when we upgrade. With a new pipeline for a test solution targeting Any CPU and .NET 7.0, I am getting this error:

##[error]The nuget command failed with exit code(1) and error(NU1201: Project X.Common is not compatible with net70-windows (.NETFramework,Version=v7.0,Profile=windows). Project X.Common supports: net70 (.NETFramework,Version=v7.0)
|
Azure Devops 2022RC [on-prem] Build Issues with .NET 7
|
Here is a working pipeline for your example.

def data = ["john": "33", "alex": "45", "michael": "22"]
properties([
parameters ([
extendedChoice(
name: 'CHOICE',
description: 'name and age selection',
type: 'PT_SINGLE_SELECT',
value: "${data.keySet().join(',').toString()}"
)
])
])
pipeline {
agent any
stages {
stage('print choice') {
steps {
println params.CHOICE
println data.get(params.CHOICE)
}
}
}
}
|
I am having a somewhat difficult time figuring out how to make a simple Jenkins pipeline that prints values from a simple map. I use the extendedChoice plugin.

The requirement is the following: the user has a dropdown selection of names; once a name is selected, the job simply prints its value in the log.

This is the code I am trying to work with; I have made numerous changes and still get various errors, and nothing works. If anyone has any idea, I will be glad to hear about it :D

def data = ["john": "33", "alex": "45", "michael": "22"]
properties([
parameters ([
extendedChoice(
name: 'CHOICE',
description: 'name and age selection',
type: 'PT_SINGLE_SELECT',
value: data.key // i think i am writing this wrong.. i need to see names in selection dropdown box
)
])
])
pipeline {
agent any
stages {
stage('print choice') {
steps {
println params.CHOICE.value // how to print .value for user i selected?
}
}
}
}
|
how to configure jenkins extendedChoice parameter to print value from map but see key in selection dropdown
|
I believe not, this is a build-in feature for a build or release summary to view the top failing tests report. This report provides a granular view of the top failing tests in the pipeline, along with the failure details.Disable Test Plan in Project Settings will not affect "Test pass rate" view on the pipeline summary page.If you would like a related setting to control this function's visibility in Azure DevOps UI, create a suggestion ticket via:https://developercommunity.visualstudio.com/AzureDevOps/suggest
|
Is it possible to remove the "Test Pass rate" Tile seen under the Analytics section of a CI pipeline in Azure DevOps?I disabled Test Plan in the project settings and this tile is still showing.I want the Analytics Section to display only the "Pipeline pass rate" and "Pipeline duration".
|
Removing "Test Pass Rate" from the "Analytics" Section in Azure Devops Pipelines
|
exit 1 will force the task to fail. I wrote a demo:

trigger:
- none
pool:
vmImage: ubuntu-latest
steps:
- task: Bash@3
inputs:
targetType: 'inline'
script: |
# Write your commands here
exit 1 #This will make the task fail.
      # exit 0 #This will make the task succeed.

So put exit 1 in the place where you want the task to fail.
|
In my build YAML pipeline, I have a step.bash that connects to isql and runs a select query. I have a requirement to fail/exit the step in case of any issue in isql and retry a second time. However, Azure DevOps marks the step.bash as successful and skips the second retry. Is there any way to forcefully fail the bash step? I tried RAISERROR and Azure DevOps logging commands inside the isql block, but no luck.

Here's the step:

steps:
- bash: |
isql -U ${{ User }} -P ${{ Password }} -S ${{ Server }} -D ${{ Database }} <<-EOSQL
USE master
GO
Select * from table (If this query fails with error code 102)
IF @@error = 102
RAISERROR 17001 "Unable to login"
GO
EOSQL
echo "##vso[task.setvariable variable=dataset]ok"
displayName: Test dataset
enabled: true
condition: ne(variables['dataset'], 'ok')
continueOnError: true
|
Azure devOps build pipeline: Force fail yaml step.bash in case of isql issue
|
The fact that a project is downstream or upstream should not matter in the case of multi-project pipelines. That uses the trigger keyword:

trigger-multi-project-pipeline:
trigger: my-group/my-project
|
Does anyone have an example of how you can trigger an upstream project's CI from a downstream project? So if there is a commit to the downstream project, run the pipeline in the upstream project.
|
Gitlab Call Upstream Project from Downstream
|
Azure Pipelines does not have an array parameter type. You can use an object type parameter and iterate over it.

User-defined variables are not typed. A variable is a string, period. You cannot define a variable containing an array of items. However, you could add a delimited list, then split the list, i.e.:

variables:
foo: a,b,c,d
- ${{ each x in split(variables.foo, ',') }}:
- script: echo ${{ x }}
|
https://medium.com/tech-start/looping-a-task-in-azure-devops-ac07a68a5556

Below is my main pipeline:

pool:
vmImage: 'windows-latest'
parameters:
- name: environment
displayName: 'Select Environment To Deploy'
type: string
values:
- A1
- A2
default: A1
variables:
- name: abc
value: 'firm20 firm201' # How to declare Array variable?
stages:
- stage: stage01
displayName: 'Deploy in Environment '
jobs:
- template: templates/test_task_for.yml # Template reference
# parameters:
# list: $(abc)Now is my template pipeline below:parameters:
- name: list
type: ??? #What type here pls?
default: [] #? is this correct?
jobs:
- job: connectxyz
displayName: 'Connect'
steps:
- ${{each mc in variables.abc}}: # parameters.list
- task: CmdLine@2
inputs:
        script: 'echo Write your commands here. ${{mc}}'

I need to run the CmdLine@2 task multiple times. How do I define the values of abc in the main pipeline so they can be passed as a parameter to the template file? And what datatype should the parameter be so it can work as an array?
|
How to create Array Variable of Azure Devops
|
You could set the control options of each task by disabling Continue on error and running the task Only when all previous tasks have succeeded. Then, if any errors are reported in the task, the pipeline will fail. Please refer to Task control options for more information.

As for a YAML pipeline, please make sure you have added continueOnError: false to your task. Also make sure you haven't added a condition: expression to each task; it then defaults to Only when all previous tasks have succeeded.
|
The pipeline is not failing even though application errors are reported in one of the tasks. I used failonstderr: true in the pipeline, and the pipeline then detects the errors and starts failing. But the problem is that it also fails the pipeline for warnings, whereas I want the pipeline to fail only when errors are reported, not on warnings. Is there an alternative solution?
|
How to fail the Azure pipeline if any errors are reported in the task
|
Is there a way in Azure DevOps that allows me to run only failed test cases? Yes - I suggest you use the rerunFailedTests feature of the Visual Studio Test task. It will then re-run failed tests. Here is an example:

- task: VSTest@2
displayName: 'VsTest - testPlan'
inputs:
testSelector: testPlan
testPlan: xx
testSuite: xx
testConfiguration: 1
rerunFailedTests: true
    rerunMaxAttempts: 2

Refer to this doc about the Visual Studio Test task:

rerunFailedTests - Rerun failed tests: Selecting this option will rerun any failed tests until they pass or the maximum # of attempts is reached.
|
I have a category with some test cases, for example 2 test cases where 1 test case passed and the second one failed.
I need to run only the failed test case when I run this release a second time. When I enable the "re-run" option in test assemblies, both test cases run, but I need to run only the failed test case. Is there a way in Azure DevOps that allows me to run only failed test cases?
|
Is there a way to run only failed test cases in azure devpos?
|
Hi @ilja, to do this you may need to change the type of assignment operation that MemoryDataSet applies. In your catalog, declare your datasets explicitly and change the copy_mode to one of copy or assign. I think assign may be your best bet here: https://kedro.readthedocs.io/en/stable/kedro.io.MemoryDataSet.html

I hope this works, but I am not 100% sure.
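If it helps, a minimal sketch of declaring such a dataset in code rather than in the YAML catalog might look like this (the dataset name is just a placeholder, and I haven't checked this against every kedro version):

from kedro.io import DataCatalog, MemoryDataSet

# Hand the node output over as-is ("assign") instead of copying/pickling it,
# so a generator object can pass through the catalog.
catalog = DataCatalog({
    "example_node_output": MemoryDataSet(copy_mode="assign"),
})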
|
Thanks to David Beazley's slides on generators, I'm quite taken with using generators for data processing in order to keep memory consumption minimal. Now I'm working on my first kedro project, and my question is how I can use generators in kedro. When I have a node that yields a generator and then run it with kedro run --node=example_node, I get the following error:

DataSetError: Failed while saving data to data set MemoryDataSet().
can't pickle generator objects

Am I supposed to always load all my data into memory when working with kedro?
|
How to use generators with kedro?
|
OK, I've found the way. It turns out the script itself can be defined like this:

pipelineJob('name') {
definition {
cps {
script('''
pipeline {
agent {
kubernetes {
}
}
stages {
stage("name") {
steps {
script {
sshagent (credentials: ['name']) {
sh "do something"
}
}
}
}
}
}
''')
}
}
}
|
I'm trying to configure Jenkins as code and put the pipeline code in the configuration. So far I've found in the docs (https://github.com/jenkinsci/job-dsl-plugin/tree/master/docs) that I can do it if I pull the script from Git, but since my pipeline is a simple Groovy script, I'm trying to figure out how it can be defined in jobs.jcasc.yaml.

My pipeline looks like this:

pipeline {
agent {
kubernetes {
}
}
stages {
stage("test") {
steps {
script {
sshagent (credentials: ['ssh-key']) {
sh "some code"
}
}
}
}
}
}

The only option I've seen is:

- script: |
pipelineJob('name') {
description('build')
definition {
cpsScm {
lightweight(true)
scm {
git {
remote {
url("URL")
credentials("key")
}
branch("master")
}
}
scriptPath("jenkinsfile")
}
}
}
|
JCasC. Pipeline code in jobs.jcasc.yaml file
|
Check the following.@NonCPS
def listFiles(def path) {
def filterBakFiles = ~/.*\.bak$/
new File(path).traverse(type: groovy.io.FileType.FILES, nameFilter: filterBakFiles) { file ->
println file
}
}
|
Updated! Thanks to @ycr.

filesArray = []
def listFiles(def path, def filter) {
def filterBakFiles = ~/(?i)${filter}.*\.bak$/
new File(path).traverse(type: groovy.io.FileType.FILES, nameFilter: filterBakFiles) { file ->
filesArray << file.name
}
if (folder && filter && filesArray) {
return filesArray
} else if (!filesArray) {
return ["No coincidences"]
} else {
return ["please enter folder and filter"]
}
}
listFiles("\\\\networkpath\\${folder}", "${filter}")this works!.
Now I'm trying to fix the regex to find matches with numbers and text:

def filterBakFiles = ~/(?i)${filter}.*\.bak$/

Best regards
|
listening all files matching with a specific filetype and the name of the file in Groovy
|
Remove the | bc (at the end) and run the command; you're generating one long line of text and then feeding it to bc:

scale=3; cat error.log | wc -l / ( cat access.log | wc -l + cat error.log | wc -l ) * 100.0

... but this long line of text is just gibberish as far as bc is concerned. One issue: the cat ... | wc -l parts are just strings of characters; these strings do not actually call cat nor wc.

Taking this one step at a time ...

$ err_cnt=$(cat error.log | wc -l)
$ acc_cnt=$(cat access.log | wc -l)
$ echo "scale=3; ${err_cnt} / ( ${err_cnt} + ${acc_cnt} ) * 100" | bcAssumingerr_cnt=5andacc_cnt=91this generates:5.200One issue here is that the scale is applied at each step in the calculation which causes the loss of accuracy.Consider a small modification:$ echo "scale=3; ( ${err_cnt} * 100.0 ) / ( ${err_cnt} + ${acc_cnt} )" | bc
5.208

While it would be possible to eliminate the 2 variables (err_cnt and acc_cnt) by embedding the cat | wc operations within the echo ...

$ echo "scale=3; ( $(cat error.log | wc -l) * 100.0) / ( $(cat error.log | wc -l) + $(cat access.log | wc -l) )" | bc
5.208... this gets a bit cumbersome while also requiring 2 scans of theerror.logfile.
|
Each access to the web server results in either an access log entry or an error log entry. So that the total entries in access log + total entries in error log = total access attempts.
What percentage of connections resulted in errors, to 3 decimal places?

I tried piping the 2 files to compute a percentage of errors, but I get the following error:

echo 'scale=3;' 'cat error.log | wc -l' / '(' 'cat access.log | wc -l' + 'cat error.log | wc -l' ') * 100.0' | bc
(standard_in) 1: syntax error
(standard_in) 1: illegal character: |
(standard_in) 1: syntax error
(standard_in) 1: illegal character: |
(standard_in) 1: syntax error
(standard_in) 1: illegal character: |
(standard_in) 1: syntax error
|
What percentage of connections resulted in errors.... to 3 decimal places
|
For completeness, here is the kubectl/oc native command for those of us who do not have the tkn CLI. Replace target-namespace as needed.

Delete failed pipelineruns:

kubectl -n target-namespace delete pipelinerun $(kubectl -n target-namespace get pipelinerun -o jsonpath='{range .items[?(@.status.conditions[*].status=="False")]}{.metadata.name}{"\n"}{end}')

Delete successful pipelineruns:

kubectl -n target-namespace delete pipelinerun $(kubectl -n target-namespace get pipelinerun -o jsonpath='{range .items[?(@.status.conditions[*].status=="True")]}{.metadata.name}{"\n"}{end}')
|
My desired Tekton use case is simple:

successful pipelineruns should be removed after x days
failed pipelineruns shouldn't be removed automatically.

I plan to do the cleanup in an initial cleanup task. That seems better to me than annotation- or cronjob-based approaches: as long as nothing new is built, nothing has to be deleted.

Direct approaches:

Failed: tkn delete doesn't seem very helpful because it doesn't discriminate between successful or not.
Failed: oc delete --field-selector ... doesn't support the well-hidden but highly expressive field status.conditions[0].type==Succeeded.

Indirect approaches (first filtering a list of pod names and then deleting them - not elegant at all):

Failed: Filtering output with -o=jsonpath... seems costly, and the conditions array seems to break the statement, so that (why ever?!) everything is returned... not viable.

My last attempt is tkn pipelineruns list --show-managed-fields and parsing this with sed/awk... which is gross... but at least it does what I want it to do, and quite efficiently at that. But it might turn out brittle when the design of the output changes in future releases...

Do you have any better, more elegant approaches?
Thanks a lot!
|
Tekton: How delete successful pipelineruns?
|
Here is a good example of it in action:% yes | uniq &
[1] 71546
y
% ps gxa | grep yes
71545 pts/0 S 0:05 yes
% kill -PIPE 71545
%
[1]+  Done                    yes | uniq

Call yes piped to uniq in a subshell. You see the lone "y" printed and the job backgrounded. Wait a bit, then find the PID of the yes process still running, and send the PIPE signal to it. Everything completes cleanly.

Looking at the source for yes, we find no handler for SIGPIPE, so the default action of terminating is used.
|
As per this post, which says that:

The kernel sends SIGPIPE to any process which tries to write to a pipe with no readers. This is useful, because otherwise jobs like yes | head would never terminate.

If I understand it correctly, when yes | head is invoked by the user:

yes and head would be running in parallel at first
later, head stops running
yes would receive a SIGPIPE signal when it tries to write to the pipe

Am I right? If I missed something, please let me know.
|
How `yes|head` works?
|
First of all, can we quickly confirm you are on 6.5 or later? If you are on an earlier version, it might be worth upgrading, since S3 pipelines have been significantly improved since then. If you can’t upgrade, you might have to bump pipelines_extractor_get_offsets_timeout_ms, but in 6.5 or later, this should not be necessary.
Otherwise, it could be a bona fide connectivity issue. Make sure the Master Aggregator can connect to your Ceph node.
|
Running into a bit of an issue when creating pipelines:

CREATE PIPELINE library
AS LOAD DATA S3 ''
CONFIG '{"region": "eu-west-1"}'
CREDENTIALS '{"aws_access_key_id": "", "aws_secret_access_key": ""}'
INTO TABLE test FIELDS TERMINATED BY ',';

ERROR 1970 ER_SUBPROCESS_TIMEOUT_ERROR: Subprocess timed out. Truncated stderr:

Upon failure, the command.log on the aggregator says:

1958097856 2019-02-05 09:58:25.114 ERROR: write() system call (fd=11) failed with errno: 32 (Broken pipe)
1958097903 2019-02-05 09:58:25.114 ERROR: NotifyAndClose(): Failed writing back to the engine
|
ERROR 1970 ER_SUBPROCESS_TIMEOUT_ERROR: Subprocess timed out when creating a pipeline
|
You can create a parameter in the source dataset and pass the folder name dynamically.

Create a folder name parameter in the source dataset.
Create a pipeline parameter and pass its value to the dataset properties in the copy data activity at run time.
|
I am new to Azure, so I have an SFTP server which has dev,qa and prod data, the user is same but the data is placed in separate folders, can anyone please tell me how to copy the files dynamically, that is I want to fetch the files from the dev folder in the dev environment and so on.
|
How to parameterize the source in Adf pipeline copy activity
|
You can go with the following aggregation:

$match to filter the matching output. You may use $or if you need more than one matching condition.
$facet to divide the incoming data.
$group to group by the matching fields.

Here is the code:

db.collection.aggregate([
{ "$match": { bus: "EXAMP" } },
{
"$facet": {
"result": [
{
"$group": {
"_id": "$bus",
"bus": { "$first": "$bus" },
"city": { "$first": "$city" },
"AccountName": { "$first": "$AccountName" },
"agencyName": { "$first": "$agencyName" },
"depotName": { "$first": "$depotName" }
}
}
],
aggregate: [
{
"$group": {
"_id": null,
"totalCollection": { $sum: "$Collection" },
"IssueTicket": { $sum: "$IssueTicket" },
"PassengerCount": { $sum: "$PassengerCount" },
"TicketCount": { $sum: "$TicketCount" }
}
}
]
}
},
{
$set: {
aggregate: {
"$ifNull": [
{ "$arrayElemAt": [ "$aggregate", 0 ] }, null ]
}
}
}
])Working MongoPlayground
|
I want to build a MongoDB query that does a $match in the first stage of the pipeline and then returns an object of objects, where the first object is a $project of a few common fields and the next object is a sum aggregation of the fields which are not common; the fields will be mentioned in the pipeline. For example, given 2 documents after the match stage:

{
"bus": "EXAMP",
"city": "Kanpur",
"AccountName": "Examp Transport Service",
"agencyName": "BBS",
"depotName": "RAYAPUR HUBLI",
"CashCollection": 8,
"Collection": 30,
"IssueTicket": 5,
"PassengerCount": 4,
"TicketCount": 4
}
{
"bus": "EXAMP",
"city": "Kanpur",
"AccountName": "Examp Transport Service",
"agencyName": "BBS",
"depotName": "RAYAPUR HUBLI",
"CashCollection": 10,
"Collection": 20,
"IssueTicket": 7,
"PassengerCount": 5,
"TicketCount": 4
}

So I would need a projection of the fields [bus, city, AccountName, agencyName, depotName] in the first object, and in the next I would need the aggregation of the fields [CashCollection, Collection, IssueTicket, PassengerCount, TicketCount]. So my object of objects should look like below:

{
result: [
{
"bus": "EXAMP",
"city": "Kanpur",
"AccountName": "Examp Transport Service",
"agencyName": "BBS",
"depotName": "RAYAPUR HUBLI",
}
],
aggregates: {
"CashCollection": 18,
"Collection": 50,
"IssueTicket": 12,
"PassengerCount": 9,
"TicketCount": 8
}
}
|
MongoDb query to return object of object where the object is aggregation of few fields
|
Yes, that is correct.
It's also a good idea to bake the preprocessing into a pipeline, to avoid the common pitfall of scaling the test and training datasets independently.

When calling clf.fit(X_train, y_train), the pipeline will fit the scaler on X_train and subsequently use that fit to preprocess your test dataset. See an example at the beginning of the "common pitfalls and recommended practices" documentation:

"We recommend using a Pipeline, which makes it easier to chain transformations with estimators, and reduces the possibility of forgetting a transformation."

So the fact that you don't "use" the scaler yourself is by design.

With that said, if you wanted for some reason to independently access the scaler from a pipeline, for example to check its values, you could do so:

clf.fit(X_train, y_train)
# For example, get the first step of the pipeline steps[0]
# then get the actual scaler object [1]
clf.steps[0][1].scale_
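For completeness, a minimal self-contained sketch of the intended usage (with toy data standing in for the real features): fit the whole pipeline on the training set, then predict on the raw test set and let the pipeline apply the stored scaling for you:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy data standing in for the real training/test features and labels
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = make_pipeline(StandardScaler(), RandomForestClassifier(n_estimators=100))
clf.fit(X_train, y_train)      # the scaler is fitted on X_train only
y_pred = clf.predict(X_test)   # X_test is scaled with the training statistics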
|
I am used to running sklearn's StandardScaler the following way:

from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
scaled_X_train = scaler.transform(X_train)

where X_train is an array containing the features in my training dataset. I may then use the same scaler to scale the features in my test dataset X_test:

scaled_X_test = scaler.transform(X_test)

I know that I may also "bake" the scaler into the model, using sklearn's make_pipeline:

from sklearn.pipeline import make_pipeline
clf = make_pipeline(preprocessing.StandardScaler(), RandomForestClassifier(n_estimators=100))

But then how do I use the scaler? Is it enough to call the model like I normally would, i.e.:

clf.fit(X_train, y_train)

And then:

y_pred = clf.predict(X_test)?
|
How to use sklearn's standard scaler with make_pipeline?
|
Using pathlib and the glob method of a Path object, you could proceed as follows:

from itertools import chain
from pathlib import Path
path_1 = Path('data/raw/data2process/')
exts = ["xlsx", "csv", "xls"]
path_1_path_lists = [
list(path_1.glob(f"*.{ext}"))
for ext in exts]
path_1_all_paths = list(chain.from_iterable(path_1_path_lists))

chain.from_iterable allows you to "flatten" the list of lists, but I'm not sure Snakemake even needs this for the input of its rules. Then, in your rule:

input:
list_of_paths = path_1_all_paths,
    other_table = path_2

I think that Path objects can be used directly. Otherwise, you need to turn them into strings with str:

input:
list_of_paths = [str(p) for p in path_1_all_paths],
other_table = path_2
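On the Python side, one way to receive the space-separated path list that Snakemake expands from {input.list_of_paths} is a variadic click argument (nargs=-1); a rough sketch with placeholder names:

import click
import pandas as pd


@click.command()
@click.argument("input_paths", nargs=-1, type=click.Path(exists=True))
@click.argument("out_path", type=click.Path())
def concat_tables(input_paths, out_path):
    """Concatenate every given spreadsheet into a single Excel file."""
    tables = [pd.read_excel(path) for path in input_paths]
    pd.concat(tables).to_excel(out_path)


if __name__ == "__main__":
    concat_tables()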
|
I am sorry for the low-level question, I am a junior. I am trying to learn Snakemake along with click. Please help me understand, for this example, how I can put a list of paths into the input of a rule and
get this list in the Python script.

Snakemake:

path_1 = 'data/raw/data2process/'
path_2 = 'data/raw/table.xlsx'
rule:
input:
list_of_pathes = "list of all pathes to .xlsx/.csv/.xls files from path_1"
other_table = path_2
output:
{some .xlsx file}
shell:
"script_1.py {input.list_of_pathes} {output}"
"script_2.py {input.other_table} {output}"script_1.py:@click.command()
@click.argument(input_list_of_pathes, type=*??*)
@click.argument("out_path", type=click.Path())
def foo(input_list_of_pathes: list, out_path: str):
df = pd.DataFrame()
for path in input_list_of_pathes:
table = pd.read_excel(path)
**do smthng**
df = pd.concat([df, table])
    df.to_excel(out_path)

script_2.py:

@click.command()
@click.argument("input_path", type=type=click.Path(exist=True))
@click.argument("output_path", type=click.Path())
def foo_1(input_path: str, output_path: str):
table = pd.read_excel(input_path)
**do smthng**
table.to_excel(output_path)
|
snakemake: list of pathes in input
|
You likely need to authenticate your pipeline with Google Cloud. There are a few ways of doing this:

Using GOOGLE_APPLICATION_CREDENTIALS

This is an environment variable that Google applications use to authenticate with Google Cloud. You would:

Download a service account key and store it as a file (e.g. my_credentials.json).
Point the GOOGLE_APPLICATION_CREDENTIALS variable to this file (i.e. export GOOGLE_APPLICATION_CREDENTIALS=/path/to/my_credentials.json).

You're good to go. Run your pipeline!

Using gcloud to log in

Log in to your own user account using gcloud auth application-default login. This will set up the application default login for your session. You're good to go. Run your pipeline!
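Since the question's pipeline is launched from a Python script, the first option can also be wired up in the launcher itself; a minimal sketch (the key path is a placeholder):

import os

# Point the Google client libraries at the downloaded service-account key.
# This has to happen before the Beam/Dataflow pipeline is constructed and run.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/my_credentials.json"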
|
I have developed a wordcount pipeline using Apache Beam, and I am able to run the Python code locally on my machine. But when trying to run it on Dataflow I get this error:

apitools.base.py.exceptions.HttpForbiddenError: HttpError accessing https://dataflow.googleapis.com/v1b3/projects/mw-da-training/locations/%3Dus-central/jobs?alt=json:
response: <{'vary': 'Origin, X-Origin, Referer', 'content-type':
'application/json; charset=UTF-8', 'date': 'Fri, 27 May 2022 11:56:56
GMT', 'server': 'ESF', 'cache-control': 'private', 'x-xss-protection':
'0', 'x-frame-options': 'SAMEORIGIN', 'x-content-type-options':
'nosniff', 'alt-svc': 'h3=":443"; ma=2592000,h3-29=":443";
ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443";
ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000;
v="46,43"', 'transfer-encoding': 'chunked', 'status': '403',
'content-length': '158', '-content-encoding': 'gzip'}>, content <{"error": {
"code": 403,
"message": "Permission denied on 'locations/=us-central' (or it may not exist).",
"status": "PERMISSION_DENIED" } }
|
Google Cloud Dataflow Error | apitools.base.py.exceptions.HttpForbiddenError: HttpError accessing
|
After some playing around with this issue, I have come to a solution.I had originally wrote this file using VSCode, then publishing to GitLab. I finally read somewhere that I really should have been using GitLab's CI/CD editor (in browser). I opened my existing config in GitLab's CI/CD editor, added one space to the end and then removed it. The editor now claimed that my configuration was valid (even though there was no net change). I committed my "change", and the jobs ran flawlessly.
|
I am attempting to set up a pipeline for a .NET 6 project to just run some unit tests. I have a separate project in the same repository (also .NET 6) using xUnit as the unit testing framework. It seems that no matter what I change in my .gitlab-ci.yml file, I get the same error from the pipeline job ("jobs config should contain at least one visible job").

When putting my configuration into GitLab's CI Lint tool, it tells me the YAML is valid. Here is my configuration:

image: mcr.microsoft.com/dotnet/sdk:6.0
stages:
- build
- test
job_compile:
stage: build
before_script:
- "dotnet restore"
script:
- "dotnet build --no-restore"
job_run_tests:
stage: test
script:
- "dotnet test --no-restore"I am having trouble finding the issue here. Any help would be appreciated!
|
GitLab CI failing due to "jobs config should contain at least one visible job"
|
The dotnet/core/sdk image has apt (not apt-get):

$ docker run -ti --rm mcr.microsoft.com/dotnet/core/sdk:latest sh
# apt update

Following the SonarQube documentation, you can use their Docker image with the CLI already installed:

image:
name: sonarsource/sonar-scanner-cli:latest
variables:
SONAR_TOKEN: "your-sonarqube-token"
SONAR_HOST_URL: "http://your-sonarqube-instance.org"
SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar" # Defines the location of the analysis task cache
GIT_DEPTH: 0 # Tells git to fetch all the branches of the project, required by the analysis task
cache:
key: ${CI_JOB_NAME}
paths:
- .sonar/cache
sonarqube-check:
stage: test
script:
- sonar-scanner -Dsonar.qualitygate.wait=true
allow_failure: true
only:
- master
|
I have installed SonarQube on an Ubuntu machine via a Docker image. Everything works and I'm able to log in without issues. I have connected it to our GitLab installation and can see all available projects, but when I try to configure the existing pipeline with the following, I get stuck.

I have the following pipeline .yml in use (partially shown here):

sonarqube-check:
stage: sonarqube-check
image: mcr.microsoft.com/dotnet/core/sdk:latest
variables:
SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar" # Defines the location of the analysis task cache
GIT_DEPTH: "0" # Tells git to fetch all the branches of the project, required by the analysis task
cache:
key: "${CI_JOB_NAME}"
paths:
- .sonar/cache
script:
- "apt-get update"
- "apt-get install --yes openjdk-11-jre"
- "dotnet tool install --global dotnet-sonarscanner"
- "export PATH=\"$PATH:$HOME/.dotnet/tools\""
- "dotnet sonarscanner begin /k:\"my_project_location_AYDMUbUQodVNV6NM7qxd\" /d:sonar.login=\"$SONAR_TOKEN\" /d:\"sonar.host.url=$SONAR_HOST_URL\" "
- "dotnet build"
- "dotnet sonarscanner end /d:sonar.login=\"$SONAR_TOKEN\""
allow_failure: true
only:
    - master

All looks good, but when it runs it gives me this error:

$ apt-get update
bash: apt-get: command not found

I just don't know how to fix this and can't find a solution on the internet.
|
SonarScanner fails with apt-get not found
|
This isn't currently easy to do; it's probably another use case for the "metadata routing" SLEP006. In this example, since you own all the transformers, you might be able to hack something together by just attaching an attribute to the output dataset:

class FeatureEngineering(...):
...
def transform(self, X):
...
return_value.metadata = self.des_features
return return_value
class PrepareModel(...):
...
def fit(self, X, y=None):
self.des_features = X.metadata
...
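With that kind of hand-off in place, PrepareModel no longer needs des_features passed to its constructor, so (assuming the attribute hack works for your transformers) the pipeline from the question could presumably be assembled unchanged:

from sklearn.pipeline import Pipeline

# DataCleaner, FeatureEngineering, PrepareModel are the user's own classes
# from the question, with the metadata hand-off sketched above.
full_pipeline = Pipeline([('cleaner', DataCleaner()),
                          ('engineering', FeatureEngineering()),
                          ('prepare', PrepareModel())])  # no des_features argument needed
X = full_pipeline.fit_transform(df)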
|
I have implemented 3 TransformerMixin classes in an attempt to make my own scikit-learn Pipeline. However, I am unable to combine them, since the PrepareModel object uses information from the FeatureEngineering object. In particular, consider:

cleaner = DataCleaner()
df_clean = cleaner.fit_transform(df)
engineering = FeatureEngineering()
df_engineered = engineering.fit_transform(df_clean)
modelprep = PrepareModel(engineering.des_features)
X = modelprep.fit_transform(df_engineered)

Note that each of DataCleaner, FeatureEngineering, PrepareModel is a child class of TransformerMixin. How would I make a Pipeline with this setup?

from sklearn.pipeline import Pipeline
full_pipeline = Pipeline([('cleaner', DataCleaner()),
('engineering', FeatureEngineering()),
                          ('prepare', PrepareModel())])

The issue I have is that the third step needs the des_features from the second step, so this does not work. How would I make this work?
|
Scikitlearn machine learning pipeline with passthrough parameters
|
You can achieve this by using Set-ItResult, which is a Pester cmdlet that allows you to force a specific result. For example:

Describe 'tests' {
context 'list with content' {
BeforeAll {
$List = @('Harry', 'Hanne', 'Hans')
$newlist = @()
foreach ($name in $List) {
if (($name -eq "Jens")) {
$newlist += $name
}
}
}
It "The maximum name length is 10 characters" {
if (-not $newlist) {
Set-ItResult -Skipped
}
else {
$newlist | ForEach-Object { $_.length | Should -BeIn (1..10) -Because "The maximum name length is 10 characters" }
}
}
}
}

Note that there was an error in your example ($newlist wasn't being updated with the name; you were doing the reverse), which I've corrected above. But your test doesn't actually fail for me in this example (before adding the Set-ItResult logic). I think this is because, by using ForEach-Object with an empty array as input, the Should never gets executed when the array is empty, so with this approach your test would just pass because it never evaluates anything.
|
I would like to be able to skip tests if a list is empty. A very simplified example: no name is -eq to "Jens", therefore $newlist will be empty and of course the test will fail. How do I prevent it from going through this test if the list is empty?

context {
BeforeAll{
$List = @(Harry, Hanne, Hans)
$newlist = @()
foreach ($name in $List) {
if (($name -eq "Jens")) {
$name += $newlist
}
}
}
It "The maximum name length is 10 characters" {
$newlist |ForEach-Object {$_.length | Should -BeIn (1..10) -Because "The maximum name length is 10 characters"}
}
}

Fail message:

Expected collection @(1, 2, 3, 4, 5, 6, 7, 8, 9, 10) to contain 0, because The maximum name length is 10 characters, but it was not found.
|
Pester test if list is empty do not run the tests
|
To store the test.csv as a job artifact, you can add the following lines to the .gitlab-ci.yml file:

to_create:
script:
- python to_create.py
artifacts:
paths:
      - test.csv

For every run of this job, a test.csv file will be stored within GitLab. Read more about job artifacts here: https://docs.gitlab.com/ee/ci/pipelines/job_artifacts.html
|
I am running a Python script in a pipeline on GitLab which at the end creates two CSV files. The pipeline is successful; however, the CSV files are not created. I tested the process locally and the files get created. This is one example of a Python script which doesn't create the CSV file even though the pipeline is successful.
import pandas as pd
data = {
"calories": [420, 380, 390],
"duration": [50, 40, 45]
}
#load data into a DataFrame object:
df = pd.DataFrame(data)
print(df)
df.to_csv('test.csv')
Here is the configuration of the .gitlab-ci.yml file:
to_create:
script:
- python to_create.py
How can I create the test.csv file through the pipeline run on GitLab?
|
Pipeline passed but csv file not created
|
For a Jenkins job to be triggered by new commits on a repository workspace, or delivered to a stream:
The Build Definition uses a schedule indeed: "To set up a continuous integration build, set the build schedule to run at an interval, such as every 5 minutes, and ensure that the 'Build only if there are changes accepted' option is selected. If no changes are accepted when the scheduled build runs, the build is deleted."
The monitored repo workspace or stream is set as a target to the Build Definition workspace, as shown here.
That means:
the build definition has its own repository workspace, which will accept changes from the target workspace/stream every 5 minutes;
as soon as new commits are accepted by the build definition workspace, the specified Jenkins job is called.
|
I would like my Jenkins job to activate as soon as I commit to a specific stream on RTC (or my workspace). Currently I can do this on any commit I do on RTC using the Build Trigger section "Poll the source code management system", but I can only specify the polling time, not the stream or the RTC workspace to monitor.
thank you
|
Jenkins job triggered by commit on specific stream in RTC
|
This can't be done; you can't reference other fields in the query language. What you can do is use $expr with aggregation operators, like this:
db.collection.aggregate([
{
$match: {
$expr: {
$gt: [
{
$size: {
$filter: {
input: "$commitments",
cond: {
$lt: [
"$$this.tracksThisWeek",
"$$this.frequency"
]
}
}
}
},
0
]
}
}
}
])
Mongo Playground
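The same $expr filter also works outside an aggregation, in a plain find, if that fits your code better (just a sketch reusing the expression above):
db.collection.find({
  $expr: {
    $gt: [
      {
        $size: {
          $filter: {
            input: "$commitments",
            cond: { $lt: ["$$this.tracksThisWeek", "$$this.frequency"] }
          }
        }
      },
      0
    ]
  }
})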
|
I'm trying to perform an $elemMatch in a $match aggregation stage where I want to find if there is a document in an array (commitments) of subdocuments whose property tracksThisWeek is smaller than its frequency property, but I'm not sure how I can reference another field of the subdocument in question. I came up with:
{
$match: {
commitments: {
$elemMatch: {
tracksThisWeek: {
$lt: '$frequency',
},
},
},
},
},
I have a document in the collection that should be returned from this aggregation but isn't; any help is appreciated :)
|
Referencing another field of subdocument in `$elemMatch`
|
Not exactly sure what you are trying to achieve, but here is an example of how you can set variables in a stage and use them as conditions for later stages. The approval check stage, I assume, runs some checks to validate whether the pipeline should run. If those checks pass, we can set an output variable and then use that variable as a condition on the download stage. If the checks fail, then approved=false and the download stage will not run.
pool: default
variables:
system.debug: true
stages:
- stage: check_package
jobs:
- job: check_package
steps:
- bash: echo "checking package"
- stage: approval_check
jobs:
- job: bash
steps:
# run some checks and, if successful, set the output variable approved=true
- bash: echo "##vso[task.setvariable variable=approved;isOutput=true]true"
name: approval
- stage: download
dependsOn: approval_check
condition: and(succeeded(), eq(dependencies.approval_check.outputs['bash.approval.approved'], 'true'))
jobs:
- job: download
steps:
- bash: echo "Downloading"
- stage: download2
dependsOn: approval_check
condition:
jobs:
- job: download
steps:
- bash: echo "Downloading"
|
I'm currently working on an Azure DevOps YAML pipeline.
The structure of the pipeline looks something like this: As you can see, I have multiple "forks", and in one case I want to bring the "forked ways" back into one (before the download_from_source stage).
The noapproval stage is kind of unnecessary and I want to delete it. Is there a way to achieve this without having an extra stage?
So it would look something like this?
|
Forking two stages back into one azure pipeline
|
If you check the documentation, a pipeline is used when you have multiple commands and want them to execute in the given order. You can get the results in one shot from exec, or use the callback style on individual commands for results. As you are using both, you can also use exec on the pipeline chain directly:
// Assume ioredis var is a successfully-connected ioredis client
const pipeline = ioredis.pipeline();
pipeline
  .hset('mykey', { foo: 'bar' })
  .expire('mykey', 1000)
  .exec((err, results) => {
    // `err` will be null here; errors from individual commands are reported inside `results`
    // results === [[err, result], ...], one entry for every command you queued,
    // e.g. [[null, 1], [null, 1]] on success
    const failed = results.filter(([e]) => e);
    if (failed.length) {
      console.error('Error: ', failed.map(([e]) => e.message));
    } else console.info('Success!');
  });
I would recommend you re-check the documentation here (check the pipelining section). It is not verbose, but once you go through it and check out the examples it will work.
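If you prefer promises over the exec callback, the same chain can also be awaited; this is only a sketch and assumes it runs inside an async function:
const results = await ioredis
  .pipeline()
  .hset('mykey', { foo: 'bar' })
  .expire('mykey', 1000)
  .exec();
// same shape as above: [[err, result], ...], one entry per queued command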
|
I am using the ioredis library to perform a series of commands within a Redis pipeline. Here is a simplified example of what I am trying to do:
// Assume ioredis var is a successfully-connected ioredis client
const pipeline = ioredis.pipeline();
pipeline.hset('mykey', { foo: 'bar' })
.then(() => {
pipeline.expire('mykey', 1000, (err) => {
if (err) {
console.error('Error: ', JSON.stringify(err));
} else console.info('Success!');
});
});
pipeline.exec();
You'll notice this example has both a promise chain and some error handling for the second promise. When I perform this logic directly against the ioredis client (no pipeline), it works just fine, but when I use the pipeline, an error of an empty object is thrown. The documentation on ioredis is not very helpful, so any information that can point me in the right direction is highly appreciated!
|
How do I handle errors or perform promise chaining using redis pipeline in nodejs?
|
Looks like golangci-lint isn't installed successfully, or it is installed in a directory outside of the PATH. By default this installer uses the ./bin directory, so you can try ./bin/golangci-lint run -c .golangci.yml, or you can use the BINDIR variable to set the installation path.
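For example, a sketch of the Bitbucket step with the default ./bin location put on the PATH before make runs (step layout and version are taken from the question; note that wget's write-to-stdout flag is a capital -O):
- step:
    image: hashicorp/terraform:latest
    script:
      - apk add go make
      - wget -O- -nv https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s v1.44.2
      - export PATH="$PWD/bin:$PATH"   # the installer drops golangci-lint into ./bin by default
      - make install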
|
I have set up golangci-lint in my development environment by configuring the Makefile.
Makefile
build: lint_provider
go build -o ${BINARY}
lint_provider:
golangci-lint run -c .golangci.yml
install: build
mkdir -p ~/.terraform.d/plugins/${HOSTNAME}/${NAMESPACE}/${NAME}...
mv ${BINARY} ~/terraform.d/plugins/...
bitbucket-pipelines.yml
pipelines:
default:
- step:
image:
hashicorp/terraform:latest
script:
- apk add go
- apk add make
- wget -0- -nv https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s v1.44.2
- make install
- cd terraformprovider/examples/test
- ./testall.sh
This pipeline failed with:
+ make install
golangci-lint run -c .golangci.yml
make: golangci-lint: No such file or directory
make: *** [Makefile:12: lint_provider] Error 127
Makefile line 12 is golangci-lint run -c .golangci.yml. The same setup is working in the development environment,
where golangci-lint was installed with brew install golangci-lint. How do I execute golangci-lint in the Bitbucket pipeline environment?
|
Unable to run golangci-lint on Bitbucket CI
|
Why should it be unsafe?! Many ETL jobs are designed to run on weekends.The only thing you must worry about is that you should not keep your computer turned on and logged in with your user while you are not at work.
|
At work, I executed an SSIS pipeline between 2 databases on 2 different servers and I left for the weekend because it was taking too long... Now I'm worried whether this will produce an issue, because it's in debugger mode.
Is it safe?
|
Forgot SSIS Pipeline executed
|
Bash doesn't expand variables inside single quotes. You can end the single quotes before the variable and continue afterwards:
- >
curl -H 'Content-Type: application/json'
-d '{"text": "http://gitlab.company.pl/mobile/flutter/myproject/-/jobs/artifacts/'"$CI_JOB_ID"'/download?job='"$CI_JOB_NAME"'"}'
https://link.to.my/webhook/eca8
(I put the variable in double quotes, which shouldn't be necessary in this case but is generally good practice.)
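Alternatively, a sketch of the all-double-quotes form, where the inner JSON quotes are escaped and the variables expand directly:
- >
  curl -H 'Content-Type: application/json'
  -d "{\"text\": \"http://gitlab.company.pl/mobile/flutter/myproject/-/jobs/artifacts/$CI_JOB_ID/download?job=$CI_JOB_NAME\"}"
  https://link.to.my/webhook/eca8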
|
How do I insert a variable in my .gitlab-ci.yml? I would like to insert it in the after_script section so I can make a webhook call with the link of the artifact that is going to be generated by this pipeline. I tried the following, but it just reads it as a string.
stages:
- build
flutter_build_android:
timeout: 1h
stage: build
before_script:
- flutter clean
- flutter pub get
script:
- flutter build apk --dart-define=SDK_REGISTRY_TOKEN="${MAPBOX_TOKEN}"
- cp build/app/outputs/apk/release/app-release.apk ./
artifacts:
paths:
- ./app-release.apk
when: on_success
expire_in: 30 days
after_script:
- >
curl -H 'Content-Type: application/json' -d '{"text": "http://gitlab.company.pl/mobile/flutter/myproject/-/jobs/artifacts/$CI_JOB_ID/download?job=$CI_JOB_NAME"}' https://link.to.my/webhook/eca8
tags:
- gradle
- flutter
rules:
- if: '$CI_COMMIT_BRANCH == "rc"'
|
How to insert a variable in a folded string in yaml?
|
You simply need to use the common -PipelineVariable parameter with the Sort-Object cmdlet instead of Get-VMHost in order to have the pipeline variable reflect the sorted object sequence. A simplified example:
3, 6, 2, 1 | Sort-Object -PipelineVariable value | Select-Object -First 2 |
ForEach-Object { $value }
Output:
1
2
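Applied to the original command, that would look something like this (a sketch only, not run against a real vCenter):
Get-VMHost | Sort-Object Name -PipelineVariable VMHost |
    Select-Object -First 2 |
    ForEach-Object { Write-Host $VMHost }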
|
I'm running a series of cmdlets in a pipeline, setting a -PipelineVariable for each, but I'm getting unexpected results due to the use of a Sort-Object cmdlet higher in the pipeline. For example, take the sample code, which is just a portion of my full code. Get-VMHost is pulling random hosts, but my OCD wants an alphabetical list with only the first 2 selected. As such, I am adding a Sort-Object after the Get-VMHost, but that breaks the pipeline variable at the end.
get-vmhost -PipelineVariable VMHost | sort Name | select -first 2 | % {write-host $vmhost}
VMHost3
VMHost3
Instead, I expected to see
VMHost1
VMHost2
I understand this to be a result of some cmdlets, such as Sort-Object, having to aggregate all input before processing, which breaks the stream. I sort of understand this. Without the use of ...Select -First 2... I will get the whole dataset and can simply sort as my final step. I could also just add the select -first 2 at the end.
I'm just trying to understand the issue, why it happens, and whether there's an inline workaround up front.
***Edit... I have my answer, which is simply to set the PipelineVariable AT the sort statement. Thanks @mklement0
|
Powershell pipelinevariable lost/overwritten when using Sort-Object. alternative?
|
I think you need to use two column transformers. So if you set up the minmax this way:
minmax = ColumnTransformer([(
"minmax",
MinMaxScaler(),
["age", "sibsp", "parch", "fare"])
],remainder='drop')
The output comes without column names, but based on the column names we passed in, age will be the first column, so:
imp = ColumnTransformer([(
"impute",
SimpleImputer(missing_values=np.nan, strategy='mean'),
[0])
],remainder='passthrough')
Then into a pipeline:
Pipeline([("scale",minmax),("impute",imp)]).fit_transform(dt)
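A self-contained version of the same two-step approach, with imports, assuming the capitalized column names from the question (use whatever casing your dataframe actually has):
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

minmax = ColumnTransformer(
    [("minmax", MinMaxScaler(), ["Age", "SibSp", "Parch", "Fare"])],
    remainder="drop",
)
# After scaling, the output is a plain array, so Age is addressed by position (index 0).
impute_age = ColumnTransformer(
    [("impute", SimpleImputer(missing_values=np.nan, strategy="mean"), [0])],
    remainder="passthrough",
)
preprocess = Pipeline([("scale", minmax), ("impute", impute_age)])
X = preprocess.fit_transform(dt)  # dt is the dataframe loaded in the question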
|
I am working on the Titanic dataset and I wish to handle all the preprocessing activities in a pipeline. So, here is my code. To get the dataset:
!wget "https://calmcode.io/datasets/titanic.csv"
And then read it as below:
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
dt = pd.read_csv("./data/titanic.csv", index_col=["PassengerId"])
And then I set up a single pipeline which is supposed to preprocess the numerical features:
numerical_features = ["Age", "SibSp", "Parch", "Fare"]
numerical_pipeline = Pipeline(steps=[("min_max_scaler", MinMaxScaler()),
('num_imputer', SimpleImputer(missing_values=np.nan, strategy='mean'))])
Then fit the pipeline:
column_transformer = ColumnTransformer(transformers=[
('numeric_transformer', numerical_pipeline, numerical_features)], remainder='drop')
column_transformer.fit(dt)
transformed_dt = column_transformer.transform(dt)
But I need to apply the Imputer only to the Age feature and not to all the other columns. Currently, it applies the imputer over all the columns. My question is: how can I specify that I need to apply the SimpleImputer only on the Age column and not on all of the numerical_pipeline?
|
How to setup the Imputer as part of sklearn pipeline?
|
You can do it as needed using GitHub Actions and Docker Hub only. You should also check out Keel with GitHub: https://github.com/keel-hq/keel
Step 1
name: Stable Build
on:
push:
tags:
- "*.*.*"
...
- name: Set tag in env
run: echo "TAG=${GITHUB_REF#refs/*/}" >> $GITHUB_ENV
...
tags: runq/go-kube:${{ env.TAG }}, runq/go-kube:latest
Step 2
Once the build is done you can push it to Docker Hub.
Step 3
Keel can auto-update the deployment, but if you don't want that you can apply the YAML config from the GitHub Action each time instead. Read more at: https://dev.to/achu1612/ci-cd-for-kubernetes-using-github-actions-and-keel-4b7c
If you are planning to use Azure you should check out: https://github.com/marketplace/actions/deploy-to-kubernetes-cluster
|
We are thinking about migrating our infrastructure to Kubernetes.
All our source code is in GitHub; Docker containers are in Docker Hub. I would like to have a CI/CD pipeline for Kubernetes using only GitHub and Docker Hub. Is there a way? If not, what tools (as few as possible) should we use?
|
CI/CD Kubernetes Deployment using Github Actions
|
To resolve this "An assembly specified in the application dependencies manifest (APItests.deps.json) was not found: package: ‘Microsoft.AspNet.WebApi.Client’, version: ‘5.2.6’ path: ‘lib/netstandard2.0/System.Net.Http.Formatting.dll’" error, try either of the following ways. (Thank you Jirapong. Posting your suggestion as an answer to help other community members.)
You can try upgrading all project dependencies or modifying the installer to include the new files.
If the migration is a class library in an Azure Functions project, then you have to make sure you run Add-Migration while the EF library project is selected as the Startup Project.
You can refer to: An assembly specified in the application dependencies manifest (appname.deps.json) was not found: package; An assembly specified in the application dependencies manifest (...) was not found; and Fixing "An assembly specified in the application dependencies manifest projectname.deps.json was not found".
|
Currently, we are using the .NET Core 2.1 framework for our test automation Web API project. I am trying to run one of the test project DLLs for test case execution. Please find below the command used through the Visual Studio IDE:
Command: dotnet test APItests.dll
The above command works well on my local machine with Visual Studio; test cases are getting executed. I have built the Azure build pipeline as well as the release pipeline for the same.
Also, artifacts are getting published to the drop location. But in the release pipeline the .NET Core test task is failing with the below error.
Framework: .NET Core 2.1
Error: An assembly specified in the application dependencies manifest (APItests.deps.json) was not found:
package: ‘Microsoft.AspNet.WebApi.Client’, version: ‘5.2.6’
path: ‘lib/netstandard2.0/System.Net.Http.Formatting.dll’
Could you please find the YAML file details below.
Steps:
task: DotNetCoreCLI@2
displayName ‘dotnet custom’
inputs:
command: custom
projects: ‘\TestAutomation.Application.Hosting.WebApi\ApiTests.dll’
custom: vstest
workingDirectory: ‘$(System.DefaultWorkingDirectory)’
Could you please have a look and let me know the suggested solution for the same.
|
Net core VS test task is failing with the error : assembly specified in the application dependencies manifest deps json was not found
|