Response | Instruction | Prompt
---|---|---|
As per a recent spark.ml training I attended, it was advised to follow this approach:
cv = CrossValidator(estimator=lr, ...)
pipelineModel = Pipeline(stages=[idx, assembler, cv])
cv_model = pipelineModel.fit(train)
Hope this helps!
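A fuller PySpark sketch of this pattern, for reference; the param grid and evaluator are illustrative additions (not part of the original advice), and idx/assembler stand for the stages mentioned above:
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

lr = LogisticRegression(featuresCol="features", labelCol="label")
grid = ParamGridBuilder().addGrid(lr.regParam, [0.01, 0.1]).build()
cv = CrossValidator(estimator=lr,
                    estimatorParamMaps=grid,
                    evaluator=BinaryClassificationEvaluator(),
                    numFolds=3)
# idx and assembler are the feature-preparation stages from the answer above
# (presumably an indexer and a VectorAssembler)
pipelineModel = Pipeline(stages=[idx, assembler, cv]).fit(train)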
|
Suppose I have many steps in my feature engineering: I would have many transformers in my pipeline. I am wondering how Spark handles these transformers during cross-validation of the pipeline: are they executed for each fold? Would it be faster to apply the transformers before cross-validating the model? Which of these workflows would be the fastest (or is there a better solution)?
1. Cross validator on pipeline:
transformer1 = ...
transformer2 = ...
transformer3 = ...
lr = LogisticRegression(...)
pipeline = Pipeline(stages=[transformer1, transformer2, transformer3, lr])
crossval = CrossValidator(estimator=pipeline, numFolds=10, ...)
cvModel = crossval.fit(training)
prediction = cvModel.transform(test)
2. Cross validator after pipeline:
transformer1 = ...
transformer2 = ...
transformer3 = ...
pipeline = Pipeline(stages=[transformer1, transformer2, transformer3])
training_trans = pipeline.fit(training).transform(training)
lr = LogisticRegression(...)
crossval = CrossValidator(estimator=lr, numFolds=10, ...)
cvModel = crossval.fit(training_trans)
prediction = cvModel.transform(test)
Finally, I have the same question about caching: in 2. I could cache training_trans before doing my cross-validation; in 1. I could use a Cacher transformer in the pipeline before the LogisticRegression (see "Caching intermediate results in Spark ML pipeline" for the Cacher).
|
Is cross-validation faster without using pipelines in spark-ml?
|
You can't do this with 2 channels. In order to perform a send on ch2, the value to be sent must already be ready (evaluated). Spec: Send statements: "Both the channel and the value expression are evaluated before communication begins." So even if you did this, which seemingly does not buffer the value in any variable:
ch2 <- (<- ch1)
the receive from ch1 would still be evaluated first, and only then would the send be attempted, which means the value must already have been received from ch1. But you can't tell if anyone is "listening" on ch2 (meaning whether a send on ch2 could proceed) without attempting to send on ch2 (and even if you could, there is no guarantee that someone would still be listening when you actually send after such a check).
|
Imagine I have a goroutine that reads from one channel and writes to another.
ch1 := make(chan int)
ch2 := make(chan int)
go func() {
for num := range ch1 {
ch2 <- num+1
}
}()
If ch2 is blocked, the goroutine will still read a value from ch1, effectively introducing a buffer of 1 in the channel. Since I'm using channels for control flow, I don't want any buffering. How can I make a pipeline that executes in a completely lock-step fashion?
Or put differently, how can I transfer a value from one channel to the next in one atomic operation? I basically want to wait for both ch1 and ch2 to be at a rendezvous point.
|
How can I transfer a value from one Go channel to another without an implicit buffer?
|
I tried to use ffmpeg and a few other workarounds, but the easiest way to do it was: http://answers.opencv.org/question/6976/display-iplimage-in-webbrowsers/
The final version should have:
_write(client, (char*)"HTTP/1.0 200 OK\r\n", 0);
_write(client, (char*)"Server: Mozarella/2.2\r\n"
"Accept-Range: bytes\r\n"
"Connection: close\r\n"
"Max-Age: 0\r\n"
"Expires: 0\r\n"
"Cache-Control: no-cache, private\r\n"
"Pragma: no-cache\r\n"
"Content-Type: multipart/x-mixed-replace; boundary=mjpegstream\r\n"
"\r\n", 0);Thanks ;-)
|
Let's say I have a very simple program written in C++ using OpenCV 3.4 under Windows 10.
VideoCapture cap("test.avi");
Mat frame;
while(true){
if (!cap.read(frame))
{
break;
}
// SEND FRAME TO PIPE
}
It's just a simple example of reading an AVI video frame by frame, but in the end it's going to be a server-side application which produces a modified stream from a few IP cameras. I want to use the HTML5 video tag to display the output directly on a website, but it's quite hard to find useful information related to that topic (for Windows). If I understand it correctly, I need to define a pipeline and send an MJPEG stream there with the help of FFmpeg, where FFmpeg will create a local HTTP server on a specific port. Has anyone ever tackled a similar task under Windows? I guess that 80% of the task is related to proper usage of the ffmpeg command-line tool; one of my priorities is minimal modification of the application. So to make a long story short, I have an application which I can call directly from the command line:
stream_producer.exe CAMERA_1
and I want to be able to see the MJPEG stream at:
http://localhost:1234
which can be displayed on a local website in the intranet. Regards.
|
Stream frame from video to pipeline and publish it to HTTP mjpeg via ffmpeg
|
Quoting from the shuffle_batch_join TF documentation: "The tensors_list argument is a list of tuples of tensors, or a list of dictionaries of tensors. Each element in the list is treated similarly to the tensors argument of tf.train.shuffle_batch()." Basically, shuffle_batch_join expects to receive a list of tensors, perform shuffle_batch on each member of the list, and return a list of tensors with the same number and types as tensors_list[i]. Be aware that if you use shuffle_batch_join, len(tensors_list) threads will be started, with thread i enqueuing the tensors from tensors_list[i]. tensors_list[i1][j] must match tensors_list[i2][j] in type and shape, except in the first dimension if enqueue_many is true. Here is a link to the doc.
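A minimal sketch of the two call shapes side by side (this is the legacy TF 1.x queue-runner API; the tensor names are placeholders):
import tensorflow as tf

# One reader, parallelism via several enqueuing threads on the same tensors:
batch = tf.train.shuffle_batch(
    [image, label], batch_size=32, capacity=2000,
    min_after_dequeue=1000, num_threads=4)

# One thread per element of tensors_list, e.g. one per independent reader:
batch = tf.train.shuffle_batch_join(
    [[image1, label1], [image2, label2]],
    batch_size=32, capacity=2000, min_after_dequeue=1000)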
|
Looking at both function signatures with their arguments:
tf.train.shuffle_batch_join(
tensors_list,
batch_size,
capacity,
min_after_dequeue,
seed=None,
enqueue_many=False,
shapes=None,
allow_smaller_final_batch=False,
shared_name=None,
name=None
)
and
tf.train.shuffle_batch(
tensors,
batch_size,
capacity,
min_after_dequeue,
num_threads=1,
seed=None,
enqueue_many=False,
shapes=None,
allow_smaller_final_batch=False,
shared_name=None,
name=None
)
the only difference among the arguments is num_threads, which intuitively suggests that tf.train.shuffle_batch can be processed with multiple threads or processes. Except for that, they seem to do pretty much the same work. I was wondering if there is a fundamental difference based on which someone might choose one over the other, apart from multiprocessing of batches.
|
Difference between tf.train.shuffle_batch_join and tf.train.shuffle_batch
|
Came up with a hack where I had to write the RFormulaModel to disk and then read the pipelineModel part back in as a PipelineModel. From there I have access to the StringIndexerModel stages, as shown here:
import org.apache.spark.ml.PipelineModel
import org.apache.spark.ml.feature.StringIndexerModel
rfModel.write.overwrite.save("/rfModel")
val pModel = PipelineModel.read.load("/rfModel/pipelineModel")
val strIndexers = pModel.stages.filter(stage => stage.isInstanceOf[StringIndexerModel])
val labelMaps = strIndexers.map(e => { val i = e.asInstanceOf[StringIndexerModel]; (i.getInputCol, i.labels)})
|
All, I have a simple data frame like below. I am using the RFormula API to make a model matrix as below:
val formula = "dep ~ indep"
val rF = new RFormula().setFormula(formula).setFeaturesCol("features").setLabelCol("label")
val rfModel = rF.fit(df)
where rfModel is of type RFormulaModel. According to the docs here, the mapping of the categorical variable "indep" should be available for access from this object as pipelineModel, but this seems to be a private member. My question is: how do I get the labels and corresponding indices from the RFormulaModel object? I know I can use the metadata of the transformed dataframe and do string manipulation, but is there a straightforward way to do this? Thanks for any help!
|
How do i get the factor to index mappings from RFormula/RFormulaModel in Apache Spark?
|
I made a mistake. My mistake is that when an LDM instruction is followed by an R-type, I stall the processor when the R-type is in stage 3 and the LDM is in stage 4. Instead, I should detect the dependency one clock before that, when the R-type is in stage 2 (decode) and the LDM is in stage 3 (exec). In this situation, I should stall the pipeline. So, as a result, when the R-type is in stage 2, the second LDM is in stage 3 and the first LDM is in stage 4, I detect the dependency and stall the pipeline one cycle. In the next clock, the R-type is still in stage 2, the second LDM is in stage 4 and the first LDM is writing back to the register file; thus, because the R-type is still in stage 2, it can read the data written to the register file. (Write-back completes on the negative edge of the clock. On the positive edge, the first argument of the R-type is ready.)
|
I am designing a MIPS-like CPU with Verilog and now I'm handling data hazards.
I have these instructions:
Ins[0] = LW r1 r0(100)
Ins[1] = LW r2 r0(101)
Ins[2] = ADD r3 r2 r1
I'm using a pipeline and my datapath is something like this: I have 5 stages, with 4 latch buffers separating them. The problem is that when the ADD instruction reaches stage 3 (where the ALU should calculate r1 + r2), instruction 1 (the second LW) is in stage 4 and hasn't yet read the r0 + 101 address of the memory, so I should stall one cycle, and after that Ins1 reaches the last stage. In this situation, the first LW has finished its work and the new value of r1 isn't anywhere in the datapath, but I need to pass this value to input B of the ALU. (This is called data forwarding, because when the third instruction was in stage 2 the value of r1 wasn't ready and I should forward it from later stages (the blue wires which come out of the last MUX and go to the ALU MUXes), but because of the stall of the second LW, I don't have the value of r1 anymore.) Thanks for any help.
|
a specific case of data hazard( when a R-Type instruction comes after two consecutive LW )
|
You could use dynamic dependencies. These are dependencies that are known at runtime. Each time you yield a dynamic dependency, the run() method will hold until the dependency is done. For example:
class RunAll(luigi.WrapperTask):
def requires(self):
return TaskA()
def run(self):
files = json.load(self.input().open('r'))
for file in files:
yield ProcessFileTask(file=file)
Also see https://luigi.readthedocs.io/en/stable/tasks.html#dynamic-dependencies
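For completeness, a minimal sketch of what the yielded task might look like; the class name comes from the question, while the output path is an assumption:
import luigi

class ProcessFileTask(luigi.Task):
    file = luigi.Parameter()

    def output(self):
        # one marker target per processed file
        return luigi.LocalTarget('/path/to/processed/{}.done'.format(self.file))

    def run(self):
        # ... process self.file here ...
        with self.output().open('w') as f:
            f.write('done')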
|
I have a task which generates the list of files that should be processed:
class TaskA(luigi.Task):
def run(self):
# some code which generates list of files into output()
def output(self):
return luigi.LocalTarget(filepath='/path/to/process_these_files.json')
And I have a wrapper task, which should run TaskA, get the parameters, and run the processing task with the values that I put into process_these_files.json:
class RunAll(luigi.WrapperTask):
def requires(self):
files = json.load(TaskA().open('r'))
for file in files:
yield ProcessFileTask(file=file)
Any ideas how to do it?
|
Run taskA and run next tasks with parameters, that returned taskA in luigi
|
The documentation says: "The services keyword defines just another Docker image that is run during your job and is linked to the Docker image that the image keyword defines. This allows you to access the service image during build time."
|
I am using GitLab's pipeline for CI and CD to build images for my projects. In every job there are configurations to be set, like image and stage, but I can't wrap my head around what services are. Can someone explain their functionality? Thanks. Here's a code snippet I use that I found:
build-run:
image: docker:latest
stage: build
services:
- docker:dind
script:
- docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
- docker build -t "$CI_REGISTRY_IMAGE/my-project:$CI_COMMIT_SHA" .
- docker push "$CI_REGISTRY_IMAGE/my-project:$CI_COMMIT_SHA"
cache:
untracked: true
environment: build
|
What are services in gitlab pipeline job?
|
This is most likely because your ErrorAction is not properly set. Just add -ErrorAction Stop to your Get-ADUser command and see if that corrects the issue; otherwise it may not activate your catch. To elaborate, the error you're seeing on screen is most likely not a terminating error, so it doesn't activate your catch; specifying -ErrorAction Stop will cause non-terminating errors to enter the catch and capture the error for you in the text file. However, if you don't want your pipeline to terminate during this, read the edit below.
Edit: I realize you may not want your pipeline to terminate when a non-terminating error occurs. If that is the case, you'll need to change a couple of things.
# Start by clearing out errors before you execute your pipeline code. Do not put a try/catch on it.
$Error.Clear()
$Users | Get-ADUser -Properties * -ErrorAction SilentlyContinue | select * | Export-Csv export.csv
$Error | Out-File C:\TEMP\errors.txt -Append
|
I am trying to get users from Active Directory. The list of users is in the file users.txt. When I do a bulk query, I want to export all details to a CSV while keeping the headers (column names). I also want to handle the errors in a file errors.txt. Here is a fragment from my code:
$Users = Get-Content .\users.txt
try {
$Users | Get-ADUser -Properties * | select * | Export-Csv export.csv
} catch {
$_ | Out-File errors.txt -Append
}
But I got on-screen errors instead of errors in a file. For example:
Get-ADUser : Cannot find an object with identity: 'admin' under:
'DC=test,DC=dev,DC=net'.
At line:2 char:14
+ $Users | Get-ADUser -Properties * | select * | Export-Csv export.csv
+ ~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (admin:ADUser) [Get-ADUser], ADIdentityNotFoundException
+ FullyQualifiedErrorId : ActiveDirectoryCmdlet:Microsoft.ActiveDirectory.Management.ADIdentityNotFoundException,Microsoft.ActiveDirectory.Management.Commands.GetADUser
I don't want to use a loop like foreach, because I can't append to the CSV file and I also lose column names. The question is: how can I handle errors in a file when I execute $Users | Get-ADUser for multiple users?
|
PowerShell error handling when using pipe for multiple queries: $Users | Get-ADUser
|
As @Vivek Kumar states, you should use the FeatureUnion() method in order to construct that pipe. It is usually used to concatenate inputs to let the model train on the extended data. So, in your case the pipe should look like the following:
def concat(a1, a2):
return np.concatenate((a1, a2), axis=1)
subpipe = Pipeline(
[('concat', FunctionTransformer(concat, kw_args={'a2': X_train[nominalFeatures]})),
('preproc', numPipeline())])
union = FeatureUnion(
[('prep_data', subpipe),
('raw_data', FunctionTransformer(concat, kw_args={'a1': X_train_num}))])
pipe = Pipeline(
[('union', union),
('logreg', LogisticRegression())])
Then you should be able to perform pipe.predict(X_test), provided X_test is already preprocessed. Quick check: I applied the numPipeline() function to X_train[nominalFeatures] and left X_train_num as it is. I hope that is what you desire.
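In more recent scikit-learn versions the same split between numerical and categorical columns is usually expressed with ColumnTransformer instead of a manual concat; a minimal sketch, assuming numericalFeatures and nominalFeatures are the two lists of column names:
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

preproc = ColumnTransformer([
    ('num', StandardScaler(), numericalFeatures),
    ('cat', OneHotEncoder(handle_unknown='ignore'), nominalFeatures),
])
pipe = Pipeline([('preproc', preproc), ('logreg', LogisticRegression())])
pipe.fit(X_train, y_train)
pipe.predict(X_test)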
|
I have a dataset with numerical and categorical features on which I am trying to fit a classifier. My idea was to preprocess the categorical data first using Pandas such that my dataset can be written as (to borrow MATLAB's concatenation notation) X_train = [ X_train_num, X_train_cat ] and X_test = [ X_test_num, X_test_cat ]. To deal with numerical data, I did the following:
# define concatenation of arrays so we can assemble the various parts
# that are preprocessed differently in the pipelines
def concat(a1, a2):
return np.concatenate((a1, a2), axis=1)
# pipeline to preprocess, reassemble, and fit our models
trainPipeline = Pipeline([
('preprocessing', numPipeline), # scale numerical data
('assembling', FunctionTransformer(concat, kw_args={'a2' : X_train[nominalFeatures]})), # wrong, but how?
('classifying', LogisticRegression())
])
The issue here is that when I pass X_train to the pipeline, it only extracts X_train_num to scale it in the first step, which is why I need to reassemble X_train_num_scaled with X_train_cat = X_train[nominalFeatures] in the second step. The code above will obviously not work when I use X_test as an input for prediction, unless I find a way to access the initial input from the first step and use that in the concatenation step. I have tried to look at trainPipeline.steps[0] and down the list for the initial variable name but found nothing that could help me. What am I missing?
|
Passing the argument from a previous step in sklearn pipelines
|
In your build plan, make sure you get the resource your folder is in before you run your task. The basic idea is that resources can be mounted as inputs in your task. Here's a small example:
resources:
- name: your-folder
type: git
source:
uri: git://git.example.com/your/folder.git
branch: master
jobs:
- name: your-job
plan:
- get: your-folder
- task: do-work
file: task.yml
|
How do I pass a folder to a pipeline with custom code?
To elaborate: I have a few scripts in a folder (I am aware that this has to go to git) and this folder needs to be passed as input to a task to run the script. I have added inputs: [Current DIR Name] in the task.yml, and it works fine if I run the task through the fly execute command. But if I add this task to a pipeline and run it through fly set-pipeline, the folder does not get uploaded/added to the container. Error message: Missing inputs (fol-name). Any help would be greatly appreciated.
|
Concourse CI input as folder to pipeline
|
Luigi supports the concept of resources, which are taken when the task is scheduled and run, and then released when the task is finished. Why do you need a more advanced approach? From the doc: "This section can contain arbitrary keys. Each of these specifies the amount of a global resource that the scheduler can allow workers to use. The scheduler will prevent running jobs with resources specified from exceeding the counts in this section. Unspecified resources are assumed to have limit 1." Example resources section for a configuration with 2 hive resources and 1 mysql resource:
[resources]
hive: 2
mysql: 1
In your case you can model this by having, e.g., aws: 1. This resource will then be used only once at a time. Now, if what you want is to control how the tasks are scheduled, then you could try to use priorities. Inside your task you can then add:
resources = {"aws": 1}
As far as I know there's no way to directly take resources, as this is controlled by the task execution. UPDATED (from the comment): If, for example, you want to perform certain things (e.g. a cleanup) you might want to use event handlers for tasks:
@luigi.Task.event_handler(luigi.Event.FAILURE)
def handler_failure(self, exception):
# do cleanups
See the full documentation for more info.
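A minimal sketch of declaring such a resource on a task (the task and resource names are assumptions):
import luigi

class ExpensiveClusterTask(luigi.Task):
    # with [resources] aws: 1 in the Luigi config, the scheduler runs at most
    # one task holding this resource at any time
    resources = {"aws": 1}

    def run(self):
        pass  # ... work that needs the costly resource ...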
|
I have a Luigi pipeline which consists of a graph of tasks that I run in batch. Some of those tasks rely on a costly resource (for example an AWS EC2 cluster of machines, or another costly resource). I am trying to use this resource in a smart way, so that I acquire it prior to running the tasks and release it as soon as all tasks have completed. In general the costly resource is allocated at the beginning of the pipeline, and half-way through the dependency graph it could very well be released. Is there an efficient way to model this in Luigi, to achieve the acquire and release of the resource? Modelling it in terms of Acquire and Release luigi.Tasks is not optimal, because it adds a lot of complexity and unnecessary edges to my graph. Ideally the scheduler would check its state and, when there are no more RUNNING or PENDING tasks that require the resource, it could release it. Does this exist already, or would I have to add this functionality to Luigi myself?
|
Release `resource` in a Luigi pipeline
|
As of ruffus version 2.4, you can use the builtin ruffus.cmdline, which stores the appropriate flags via the cmdline.py module that uses argparse. For example:
from ruffus import *
parser = cmdline.get_argparse(description='Example pipeline')
options = parser.parse_args()
@originate("test_out.txt")
def run_testFunction(output):
with open(output,"w") as f:
f.write("it's working!\n")
cmdline.run(options)
Then run your pipeline from the terminal with a command like:
python script.py --verbose 6 --target_tasks run_testFunction --just_print
If you want to do this manually instead (which is necessary for older versions of ruffus), you can call pipeline_printout() rather than pipeline_run(), using argparse so that the --just_print flag leads to the appropriate call, for example:
from ruffus import *
import argparse
import sys
parser = argparse.ArgumentParser(description='Example pipeline')
parser.add_argument('--just_print', dest='feature', action='store_true')
parser.set_defaults(feature=False)
args = parser.parse_args()
@originate("test_out.txt")
def run_testFunction(output):
with open(output,"w") as f:
f.write("it's working!\n")
if args.feature:
pipeline_printout(sys.stdout, run_testFunction, verbose = 6)
else:
pipeline_run(run_testFunction, verbose = 6)
You would then run the command like:
python script.py --just_print
|
I've got a ruffus pipeline in Python 2.7, but when I call it with -n or --just_print it still runs all the actual tasks instead of just printing the pipeline like it's supposed to. I:
* don't have a -n argument that would supersede the built-in one (although I do have other command-line arguments)
* have a bunch of functions with @transform() or @merge() decorators
* end the pipeline with a run_pipeline() call
Has anyone else experienced this problem? Many thanks!
|
Using Ruffus library in Python 2.7, just_print flag fails
|
Well, if you don't have forwarding in your pipeline, the only way of solving this conflict is with two noops.
       1   2   3   4   5    6     7    8    9
I1     IF  ID  EX  MEM WB
I2         IF  ID  EX  MEM  [WB]
NOP            IF  ID  EX   MEM   WB
NOP                IF  ID   EX    MEM  WB
I3                     IF   [ID]  EX   MEM  WB
You can clearly see from this rough table that the Write Back of I2 and the Instruction Decode of I3 are only "aligned" with two noops. I assume your textbook is wrong.
|
I have a sequence of instructions as follows:
I1 lw $1, 40($6)
I2 add $6, $2, $2
I3 sw $6, 50($1)
The question is: in a basic five-stage pipeline without forwarding, how many noops should there be between I2 and I3? I think the number is 2, while the solution given by the book is 1. Am I missing something? Any clues are appreciated. The question is actually Exercise 4.13 of Computer Organization and Design, The Hardware/Software Interface, Fourth edition.
|
Numbers of no-ops betweeen MIPS instructions
|
Just a simple version:
import pandas as pd
from sklearn import preprocessing
import sklearn
from sklearn.pipeline import Pipeline
from sklearn.pipeline import FeatureUnion
df = pd.DataFrame({'c':['a', 'b', 'c']*4, 'd': ['m', 'f']*6})
Define how to select a variable:
class ItemSelector():
def __init__(self, key):
self.key = key
def fit(self, x, y=None):
return self
def transform(self, data_dict):
return data_dict[self.key]
Now a class for the encoder:
class MyLEncoder():
def transform(self, X, y=None, **fit_params):
enc = preprocessing.LabelEncoder()
encc = enc.fit(X)
enc_data = enc.transform(X)
return enc_data
def fit_transform(self, X, y=None, **fit_params):
self.fit(X, y, **fit_params)
return self.transform(X)
def fit(self, X, y=None, **fit_params):
return self
and the pipeline:
encoding_pipeline = Pipeline([
('union', FeatureUnion(
transformer_list=[
('categorical', Pipeline([
('selector', ItemSelector(key='c')),
('LabelEncoder', MyLEncoder()) ]))
]))
])
and
X = df
encoding_pipeline.fit_transform(X)
array([0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2], dtype=int64)
If you need to use this with an algorithm, you need more details.
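Note that in current scikit-learn the usual way to encode feature columns inside a pipeline is OrdinalEncoder or OneHotEncoder rather than LabelEncoder (which is intended for targets); a minimal sketch using the same df:
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OrdinalEncoder

pipe = Pipeline([('encode', OrdinalEncoder())])
pipe.fit_transform(df[['c', 'd']])  # 2-D input, one integer code per column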
|
I do as below:
import pandas as pd
from sklearn import preprocessing
import sklearn
from sklearn.pipeline import Pipeline
df = pd.DataFrame({'c':['a', 'b', 'c']*4, 'd': ['m', 'f']*6})
encoding_pipeline =Pipeline([
('LabelEncoder', preprocessing.LabelEncoder())
])
encoding_pipeline.fit_transform(df)
and the full traceback:
TypeError Traceback (most recent call last)
<ipython-input-7-0882633ccf59> in <module>()
----> 1 encoding_pipeline.fit_transform(df)
C:\Program Files\Anaconda3\lib\site-packages\sklearn\pipeline.py in fit_transform(self, X, y, **fit_params)
183 Xt, fit_params = self._pre_transform(X, y, **fit_params)
184 if hasattr(self.steps[-1][-1], 'fit_transform'):
--> 185 return self.steps[-1][-1].fit_transform(Xt, y, **fit_params)
186 else:
187 return self.steps[-1][-1].fit(Xt, y, **fit_params).transform(Xt)
TypeError: fit_transform() takes 2 positional arguments but 3 were given
What's wrong? It looks like I have to convert the dataframe before I apply the pipeline.
|
Pipeline doesn't work with Label Encoder
|
There are several ways of passing credentials with Cloud Foundry. Putting them in your .yml file is just one option. You can set them manually with the command cf set-env, as explained here: https://docs.run.pivotal.io/devguide/deploy-apps/environment-variable.html#view-env
If you are afraid of the CLI, Bluemix also allows you to create user-defined environment variables with its GUI: https://github.com/ibm-cds-labs/simple-data-pipe/wiki/Create-a-user-defined-environment-variable-in-Bluemix#use-the-bluemix-user-interface
Regarding "I don't want to put username and password (even if it's encrypted) in the yml file": FYI, the .yml file does not leave your computer/CI server and is just read once by Cloud Foundry.
|
I know I can do this: https://docs.travis-ci.com/user/deployment/cloudfoundry
Now in .travis.yml, it will have:
deploy:
edge: true
provider: cloudfoundry
username: [email protected]
password: supersecretpassword
api: https://api.run.pivotal.io
organization: myawesomeorganization
space: staging
Although the password can be encrypted by running travis encrypt --add deploy.password, I don't want to put the username and password (even encrypted) in the yml file. Is there another way for Travis to deploy apps to Cloud Foundry (or IBM Bluemix)?
|
Cloud Foundry Deployment in Travis
|
Use streaming inserts; your limit there is 100,000 items per second. https://cloud.google.com/bigquery/streaming-data-into-bigquery
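A minimal sketch of a streaming insert from Python with the google-cloud-bigquery client; the table ID and row shape are assumptions:
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.my_dataset.edi_messages"  # assumed table
rows = [{"message_id": "123", "payload": "..."}]  # JSON rows matching the table schema

errors = client.insert_rows_json(table_id, rows)  # streaming insert, no load job used
if errors:
    raise RuntimeError("Streaming insert failed: {}".format(errors))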
|
I have built a distributed (Celery-based) parser that deals with about 30K files per day. Each file (EDI file) is parsed as JSON and saved in a file. The goal is to populate a BigQuery dataset. The generated JSON is BigQuery schema compliant and can be loaded as is into our dataset (table). But we are limited to 1000 load jobs per day, and the incoming messages must be loaded to BQ as fast as possible. So the target is: every message is parsed by a Celery task, and each result is buffered in a 300-item (distributed) buffer. When the buffer reaches the limit, all the JSON data is aggregated and pushed into BigQuery. I found Celery Batch, which is the closest out-of-the-box solution I found, but I need something suitable for a production environment. Note: RabbitMQ is the message broker and the application is shipped with Docker. Thanks.
|
Batch processing from N producers to 1 consumer
|
The latest version of Ruffus allows you to output to a new directory:
@transform(map_dna_sequence, # Input = previous stage
suffix(".sam"), # suffix = .sam
".bam",
output_dir = "/path/to/a/new_directory")Otherwise, you can change directories using eitherformatter()orregexrather thansuffix. Both of these are considerably more powerful but have more complicated syntax...BTW, it is a good idea to post on the ruffus newsgroup as well.
|
The Ruffus pipeline documentation seems to assume that one's code and data are in the same directory. All the examples have input and output file specifiers without any relative paths. How should one modify the syntax below if, say, the files to be transformed are not in the current directory?
@transform(map_dna_sequence, # Input = previous stage
suffix(".sam"), # suffix = .sam
".bam")
|
How to use subdirectories with Ruffus pipelines
|
As Stefan mentioned, you made a little mistake in the second case. The proper way of using the "ARGF.gets" approach in your case would look like:
while input = ARGF.gets
# input here represents a line
end
If you rewrite the second example as above, you will not see a difference in behavior. The actual difference you may notice between ARGF#gets and ARGF#each_line is in semantics: each_line accepts a block or returns an enumerator, while gets returns the next line if it is available. Another option is to use Kernel#gets. Beware that its behavior may differ from ARGF#gets in some cases, especially if you change the separator: "A separator of nil reads the entire contents, and a zero-length separator reads the input one paragraph at a time, where paragraphs are divided by two consecutive newlines." But for reading (and then printing) constantly from stdin you may use it as follows:
print while gets
|
Using ARGF I can create Ruby programs that respect pipelines. Suppose I want to constantly read new entries:
$ tail -f log/test.log | my_prog
I can do this using:
ARGF.each_line do |line|
...
end
Also, I found another way:
while input = ARGF.gets
input.each_line do |line|
...
end
end
It looks like both variants do the same thing, or is there a difference between them? If so, what is it? Thanks in advance.
|
Difference between 2 ways working with pipes using ARGF?
|
I have been wondering the same thing actually, although my focus was to use this directly from PowerShell, not in C#. I thought there would be some attribute or official way of doing this. The terms I've seen used are stall, as in "stalling the pipeline", or sometimes buffer. There are 2 official cmdlets I can think of which do this: Sort-Object stalls because it needs all of the objects before it can sort them, and Format-Table -AutoSize stalls because it needs all of the objects before it can figure out how to size the columns. I have come up with this workaround in PowerShell:
function Stall-Pipeline {
[CmdletBinding()]
param(
[Parameter(
ValueFromPipeline
)]
[String]
$Msg
)
Begin {
Write-Verbose "Begin"
$all = @()
}
Process {
Write-Verbose "Process"
$all += $Msg
}
End {
Write-Verbose "End"
foreach($item in $all) {
# processing
$item # processed item
}
}
}
Essentially I am using the begin block to initialize a variable that will hold all the results. The process block adds to that variable, then the end block does all of the actual processing and sends the items out to the pipeline. You can call this with -Verbose to see when each block is being called. If there is a better way, a more official or supported way, I am very interested in knowing what that is.
|
I am creating a PowerShell System.Management.Automation.Cmdlet for passing a list of strings through a pipeline to a cmdlet this way:
[Cmdlet(VerbsCommon.Add, "Signature")]
public class AddSignature : Cmdlet
...
[Parameter(Position = 0, ValueFromPipeline = true)]
public List<string> Items { get; set; }
...
Now, in the overloaded "ProcessRecord" method I get only one item at a time (Items.Count == 1, 3 times) instead of getting the full list at once passed through the pipeline:
'item1','item2','item3' | Add-Signature
Is there a possibility to pass the whole list of items (returned by Get-ChildItem) at once? I only get one item at a time. Basically I want to have the same behaviour using the pipeline as if I were using the command like this (Items.Count == 3):
Add-Signature -Items "item1","item2","item3"
Any idea?
|
Cmdlet pass a list of parameters at once through a Pipeline, not individual list items
|
oprint | echo really shouldn't work, because echo doesn't read from the input stream; it echoes its arguments. If you want to test a simple pipe, oprint | cat would be more appropriate. Even then, you should add $stdout.flush after the puts when you have an infinite loop like that. Since lots of small IO calls can be a performance bottleneck, Ruby buffers its output by default, meaning it stores up lots of little output in a buffer and then writes the whole buffer all at once. Flushing the buffer manually ensures that it won't end up waiting forever to do the actual write. Making those two changes gives me the expected output.
|
I have this oprint script:
#!/usr/bin/env ruby
amount = 100
index = 0
loop do
index += 1
if index % 5 == 0
amount += 10
end
sleep 0.1
$stdout.puts amount
end
If I run oprint | echo, then I don't see anything. If I comment out the sleep 0.1 inside oprint, then I see a lot of output. Does sleep break the pipe? Is there a fix?
|
Using `sleep` makes pipe to not work
|
The out-of-order engine will simply take any ready opcodes (instructions that have been broken down into multiple parts; ready means they aren't waiting on any dependencies) and schedule them for execution. How far it looks ahead depends on how many instructions have been fetched and decoded by the front end. In your out-of-order example, you won't be able to execute the "INC SI" until after the "MOV DL, [SI]" has read SI and gone through AGEN (address generation) to load from. The "SUB BX, 4", however, has no dependencies and is ready to be scheduled to execute whenever the hardware sees it, etc.
|
Here is an example of an out-of-order pipeline from "The Intel Microprocessor Family" by James Antonakos. Consider this sequence of instructions. The number of clock cycles assigned to each instruction is fabricated for this example.
1: MOV AL, 2 ; 1 cycle
2: MOV DL, [SI] ; 3 cycles
3: MUL DL ; 2 cycles
4: INC SI ; 1 cycle
5: SUB BX, 4 ; 1 cycle
6: ADD AX, BX ; 1 cycle
7: MOV CX, 2000 ; 1 cycle
Scheduling instructions in order between two pipelines (I know the basic concept of this):
Clock Cycle Pipeline # 1 Pipeline # 2
1 MOV AL, 5 MOV DL, [SI]
2 idle busy
3 idle busy
4 MUL DL INC SI
5 busy SUB BX, 4
6 ADD AX, BX MOV CX, 2000
Scheduling instructions out of order between two pipelines:
Clock Cycle Pipeline # 1 Pipeline # 2
1 MOV AL, 5 MOV DL, [SI]
2 INC SI SUB BX, 4
3 MOV CX, 2000 busy
4 MUL DL idle
5 ADD AX, BX idle
Can someone explain to me how out-of-order pipeline scheduling is done? Thank you!
|
Out of order UV pipelines
|
So my solution was to switch back to standard django-style login-signup for the email backend.
I found that it was easier for me to implement the workflow with two different forms, handling cases where the user already exists, and so on. If you did it with python-social-auth, I would still be interested to hear about your solution, but for now I'll use it for other authentication sources, where it pretty much works out of the box as I want it!
|
I have started to use python-social-auth in a Django project to authenticate users from Facebook, email, and potentially other sources.
I was able to integrate it into my project and to create new users with both Facebook and email.
I understand the concept of the pipeline, but something remains unclear to me: how do I differentiate login and signup? It seems to me that python-social-auth has a single pipeline for both login and signup actions. I have implemented a signup and a login (with email) template, but for now both submit the form to the URL '/complete/email/'. My login form only sends an email and a password, but this creates a new user if the email does not already exist. How would you differentiate the two use cases? Should I use the python-social-auth pipeline only for signup and implement a login view for my "log in with email" page as I would do if I were not using python-social-auth? Thanks for any answers, experiences on how you did it, or further explanations about python-social-auth concepts.
|
python-social-auth difference between login and signup
|
If you are using Cassandra 1.2 or greater, you can use BATCH to wrap up multiple INSERT/UPDATE statements. For example:
BEGIN BATCH
INSERT INTO users (userid, password, name)
VALUES ('user2', 'ch@ngem3b', 'second user');
UPDATE users SET password = 'ps22dhds' WHERE userid = 'user3';
INSERT INTO users (userid, password) VALUES ('user4', 'ch@ngem3c');
DELETE name FROM users WHERE userid = 'user1';
APPLY BATCH;
See the CQL3 BATCH documentation.
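If your client is Python, the same batching can be expressed with the DataStax driver; a minimal sketch (contact point, keyspace, and statements are assumptions):
from cassandra.cluster import Cluster
from cassandra.query import BatchStatement, SimpleStatement

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("mykeyspace")

batch = BatchStatement()
batch.add(SimpleStatement(
    "INSERT INTO users (userid, password, name) VALUES (%s, %s, %s)"),
    ("user2", "ch@ngem3b", "second user"))
batch.add(SimpleStatement(
    "UPDATE users SET password = %s WHERE userid = %s"),
    ("ps22dhds", "user3"))
session.execute(batch)  # all statements go to the server in one request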
|
I am working on an application where I need to send multiple requests to the Cassandra server. The individual requests are write/read requests with a short interval of execution, and I am observing a major bottleneck in round-trip time. Can I pipeline the requests to Cassandra to avoid the RTT, just like pipelining in Redis?
|
Cassandra : How to send multiple write/read requests in a single client request?
|
When updating from a different XNA version, such as XNA 3.1 to 4.0, you need to convert your XACT files to the latest version. There is an in-depth guide here, but I'll highlight the points.
Make a backup copy of your .xap file if you would like to keep a copy of the .xap file from before the upgrade.
On a system with XNA Game Studio 4.0 installed, click on the Start menu > All Programs > Microsoft XNA Game Studio 4.0 > Tools > Microsoft Cross-Platform Audio Creation Tool 3 (XACT3). This is the latest version and will allow you to update your file.
In the XACT3 tool, open the .xap file from the project that you upgraded to XNA 4. When it loads, you should see the following message in the XACT UI:
This project file was created with the March 2009 release of XACT.
You are running the February 2010 release.
If you save this project, it will be saved as the current version
and may no longer work with the version of XACT it was originally
created with.
Click OK to upgrade the .xap project to the new February 2010 format.
Save the .xap project in the XACT UI and close the XACT UI.
Go back to Visual Studio 2010, open the project that you upgraded from XNA Game Studio 3.1 to 4.0, and choose to build it again.
There is also this thread over on MSDN saying you can simply convert the .xap by opening it in XACT3, saving, and rebuilding the project.
|
I was looking through files that were installed with the XNA framework, and after that I got back to programming and got an error:
The .xap file was created with a version of XACT that is incompatible with
the XNA Framework Content Pipeline version used by this project. Refer to the
documentation for options to resolve this mismatch.
I don't know what it means. How can I fix it?
|
.Xap file incompatible with XNA Content Pipeline version
|
glDrawPixels should have fragment shaders applied. Figure 3.1 on page 203 of the compatibility profile makes it clear. Note, however, that the core profile removes DrawPixels. Which GL version are you using?
|
I'm confused about the OpenGL pipeline. I have an OpenGL method where I am trying to use glDrawPixels with a fragment shader, so my code looks like:
// I set up the shader before this
glUseProgram(myshader);
glDrawPixels(...);
On some graphics cards the shader gets applied, but on others it does not. I have no problem with NVIDIA, but problems with various ATI cards. Is this a bug in the ATI card? Or is NVIDIA just more flexible and I'm misunderstanding the pipeline? Are there alternatives for working around this (other than texture mapping)? Thanks, Jeff
|
OpenGL drawPixels with fragment shader
|
I'm pretty certain it just means that a task takes two clocks in unit 0 the second time through. The fact that it takes seven clocks in total alludes to this: 1 in unit 0, 1 in unit 1, 1 in unit 2, 1 in unit 3, 2 more in unit 0, and finally 1 in unit 4. It may well just be a contrived example so that there was a conflict when shifting by one clock (the author had to do something to ensure that task 2 would catch up to task 1, and that seems the easiest solution), or unit 0 may well be a non-linear processor of some sort. Another example would have been trying to pump in a task at the point where the previous task was re-entering unit 0. What they're trying to show is that, given a maximum duration within a unit of N cycles in a pipeline, you have to limit your injections of work to one every N cycles to be sure of no conflict. My bet (based on the small number of authors I know) would be on the author doing the minimal amount of work to describe the problem :-)
|
In the image below, why does task X appear two times for unit 0, at clock cycles 4 and 5? I have to make a program for the arrangement of the pipeline, but I need to know why the above happens in order to complete it. Is it just because the author wants it to repeat?
|
Why does task X, appear two times for unit 0 at clock cycles 4 and 5?
|
Your guess in a comment is correct - a web site itself also acts as an application and does not require an explicit application subfolder.
|
Just getting started on a project to migrate from Windows 2003 / IIS 6 to Windows 2008 / IIS 7, and after reading the MS documentation and various articles I am a little confused, as it states a site needs to have one or more applications. However, I have set up a new site pointed at my .NET 3.5 directory and it works. This means that:
A - I am seeing things.
B - A site does not actually need one or more applications.
Can anyone explain the above behaviour, and/or point me to any useful articles that explain sites, applications, etc.? The app pool is in classic pipeline mode; not sure if this is a problem. Many thanks.
|
IIS 7 Applications and asp.net - newbie question
|
This is a very interesting question. In DLTs there can be regular tables, materialized views (for batch queries) and views. I don't think cloning is an option here (at least I wouldn't be comfortable enough that everything would work properly). Also, each DLT saves data to a separate database, so things may be more complicated. I would suggest not cloning tables but the whole DLT pipeline. This would of course be a very hard and long process if the dataset is big, as the new pipeline will need to process all data from the beginning. Once the new DLT is validated, you may switch your views to target the new tables.
|
This is a rather complex question that addresses Databricks users only.
Let me recap a bit of the context that produced it.
In the attempt to adopt the Blue/Green deployment protocol, we found good applications of the table cloning capabilities offered by Databricks.
At every new deployment iteration, we clone the existing table into the new system while keeping the old one running.
After the new system has been validated, we update the views so that everything works with no interruption.
Well, but how do we "clone" the tables used in DLT pipelines?
Any help or suggestion is appreciated!
|
Blue/Green Deployment, Table Cloning, and Delta live table pipelines
|
You can set up an Environment Protection Rule, which will be called to check whether a release is allowed to proceed. But it requires setting up a custom GitHub App to block the deployments to the environment based on your own rules.
|
I am pretty new to GitHub Actions. We have a requirement to disable releases to PROD on Fridays and keep them enabled for the rest of the days.
Is there a way to achieve that in GitHub Actions (GHA)? Thanks!
|
Disabling git hub actions on Fridays for PROD Deployment
|
We had only one artifact under the drop folder; the script below worked for us:
drop_location="_OurProjetc_PipeLine _Alias\drop"
# Find the most recent file in the drop location
latest_file=$(ls -t "$drop_location" | grep '^test-name' | head -n1)
curl -u uname:passord -F file=@"$drop_location\\$latest_file" -F name="test-name" -F force=true -F install=true http://xx.xx.xxx.xx
|
I have a bash script in which I have written some curl commands to pick the artifact from the drop folder and deploy it. The command is similar to the one below:
curl -u uname:password -F file=@"_path/of/project/all-1.0.1-SNAPSHOT.zip" -F name="test-mname" -F force=true -F install=true http://xx.xx.xxx.xx
Here 1.0.1-SNAPSHOT.zip is dynamic and will change with every release. How can I modify the script so that I can avoid changing the pipeline after every release? Thanks.
|
Picking Artifact Dynamically From Azure Pipeline
|
This behaviour is not idempotent and is usually a recipe for trouble. What happens if the machine breaks down or the process is killed during the write stage? What happens if a rule is accidentally run twice? As advised by @Cornelius Roemer in the comment on the question, the safer way is to write to a new file. If the overwrite-like behaviour is desired, then the new file can be moved to the original file location, but some record/checkpoint file should be created to make sure that Snakemake knows not to re-process the file.
|
I am building a Snakemake pipeline; in the final rule I have existing files that I want the Snakefile to append to. Here is the rule:
rule Amend:
input:
Genome_stats = expand("global_temp_workspace/result/{sample}.Genome.stats.tsv", sample= sampleID),
GenomeSNV = expand("global_temp_workspace/result/{sample}.Genome.SNVs.tsv", sample= sampleID),
GenomesConsensus = expand("global_temp_workspace/analysis/{sample}.renamed.consensus.fasta", sample= sampleID),
output:
Genome_stats="global_temp_workspace/result/Genome.stats.tsv",
GenomeSNV="global_temp_workspace/result/Genome.SNVs.tsv",
GenomesConsensus="global_temp_workspace/result/Genomes.consensus.fasta"
threads: workflow.cores
shell:
"""
cat {input.Genome_stats} | tail -n +2 >> {output.Genome_stats} ;\
cat {input.GenomesConsensus} >> {output.GenomesConsensus} ;\
cat {input.GenomeSNV} | tail -n +2 >> {output.GenomeSNV} ;\
"""how can i solve it?Thank youI tried to do the dynamic() in the output and adding thetouch {output.Genome_stats} {output.GenomesConsensus} {output.GenomeSNV}at the end of the shell. but did not work.whenevr i run the snakemake i get:$ time snakemake --snakefile V2.5.smk --cores all
Building DAG of jobs...
Nothing to be done.
Complete log: .snakemake/log/2023-02-15T123050.937009.snakemake.log
real 0m1.022s
user 0m2.744s
sys 0m2.797s
|
How can I make snakefile rule append the results to the input file of the rule file?
|
When using ops / graphs / jobs in Dagster it's very important to understand that the code defined within a @graph or @job definition is only executed when your code is loaded by Dagster, NOT when the graph is actually executing. The code defined within a @graph or @job definition is essentially a compilation step that only serves to define the dependencies between ops; there shouldn't be any general-purpose Python code within those definitions. Whatever operations you want to perform on data flowing through your job should take place within the @op definitions. So if you wanted to print the values of your list that is input via a config schema, you might do something like:
@op(config_schema={"table_name": list})
def read_tableNames(context):
    lst = context.op_config['table_name']
    context.log.info(f'--------------> {type(lst)}')
    context.log.info(f'--------------> {lst}')
Here's an example using two ops to do this data flow:
@op(config_schema={"table_name": list})
def read_tableNames(context):
    lst = context.op_config['table_name']
    return lst

@op
def print_tableNames(context, table_names):
    context.log.info(f'--------------> {type(table_names)}: {table_names}')

@job
def simple_flow():
    print_tableNames(read_tableNames())
Have a look at some of the Dagster tutorials for more examples.
|
I am new to the Dagster world and working on the ops and jobs concepts. My requirement is to read a list of data from config_schema, pass it to an @op function, and return the same list to the job. The code is shown below:
@op(config_schema={"table_name":list})
def read_tableNames(context):
lst=context.op_config['table_name']
return lst
@job
def write_db():
tableNames_frozenList=read_tableNames()
print(f'-------------->',type(tableNames_frozenList))
print(f'-------------->{tableNames_frozenList}')
When it accepts the list in the @op function, it shows up as a frozenlist type, but when I try to return it to the job it is converted into the <class 'dagster._core.definitions.composition.InvokedNodeOutputHandle'> data type. My requirement is to fetch the list of data, iterate over the list, and perform some operations on individual items of the list using @ops. Please help me understand this. Thanks in advance!
|
how to iterate over a list of values returning from ops to jobs in dagster
|
Update: I fixed this error by adding two properties to sonar-project.properties, so it ends up like this:
sonar.projectKey=fsoupimenta_cypress-test
sonar.organization=fsoupimenta
sonar.sources=src
sonar.tests=cypress/e2e/spec.cy.js
sonar.typescript.lcov.reportPaths=coverage/lcov.info
You can see this repository for more information.
|
I tried to pass my code coverage report generated by Cypress tests with a GitHub Action, but it arrives at SonarCloud with 0% coverage.
In my pipeline, I get the following warnings:
WARN: Could not resolve 7 file paths in [/github/workspace/coverage/lcov.info]
WARN: First unresolved path: C:\Users\ferso\OneDrive\Documentos\Faculdade\cypress-test\src\App.jsx (Run in DEBUG mode to get full list of unresolved paths)
I already tried to use the sed CLI utility to correct the file paths and to use sonar.javascript.lcov.reportPaths=coverage/lcov.info, but it didn't work. This is my sonar-project.properties:
sonar.projectKey=fsoupimenta_cypress-test
sonar.organization=fsoupimenta
sonar.javascript.lcov.reportPaths=coverage/lcov.info
sonar.javascript.file.suffixes=.js,.jsx
And this is my SonarCloud workflow:
- name: fix code coverage paths
working-directory: ./coverage
run: |
sed -i 's/\/home\/runner\/work\/cypress-test\/cypress-test\//\/github\/workspace\//g' lcov.info
sed -i 's@'$GITHUB_WORKSPACE'@/github/workspace/@g' lcov.info
sed -i 's/\/home\/runner\/work\/cypress-test\/cypress-test\//\/github\/workspace\//g' sonar-cloud-reporter.xml
- name: SonarCloud Scan
uses: SonarSource/sonarcloud-github-action@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Needed to get PR information, if any
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
|
SonarCloud Code Coverage doesn't work with Github Action
|
Check for the existence of the file (-f) and, if it exists, copy it.
script:
- |
files=(conf.yaml log.txt)
for file in "${files[@]}"; do
if [[ -f "source_folder/$file" ]]; then
cp source_folder/$file dest_folder
fi
done
Take a look at other answers for one-shot, less flexible statements. Note: I haven't tested the script above, but I'm quite accustomed to GitLab pipelines and bash.
|
I have a pipeline that needs to copy some files from a folder to a new one only if the files exist in the source folder. This is my script line:
script:
- cp source_folder/file.txt dest_folder/ 2>/dev/null
I have also tried this:
script:
- test -f source_folder/file.txt && cp source_folder/file.txt dest_folder/ 2>/dev/null
but it still fails if the file does not exist:
Cleaning up project directory and file based variables.
ERROR: Job failed: exit code 1
How can I check for the file and copy it only if it exists? EDIT: this command is executed on a server; the pipeline uses ssh to log into it.
|
GitLab pipeline - Copy file if exists
|
Hi @CsNova, and welcome to SO. The first thing is to pipe your output and use tee. This will print the build to stdout AND into the file:
mvn clean test-compile | tee log.txt
Then you can add a step in your pipeline to save this artifact (https://apps.risksciences.ucla.edu/gitlab/help/ci/pipelines/job_artifacts.md#defining-artifacts-in-gitlab-ciyml):
pdf:
script: xelatex mycv.tex
artifacts:
paths:
- mycv.pdf
expire_in: 1 week
|
I am currently creating a CI/CD pipeline in GitLab and have some jobs that run Maven commands, e.g.:
maven test-compile:
stage: test
script:
- mvn clean test-compile
These are simple console commands, but I want to output the logs created by the runner to a file that can be downloaded as an artifact WHILST also keeping the logs in the console whilst the pipeline is running. I attempted the following, which output the logs to a file, but by redirecting them to a file the logs were no longer shown and I had to tail the logs to circumvent this:
script:
- mvn clean test-compile > log.txt
- tail -f ./log.txt
Is there a simpler way to get around this? Many thanks.
|
GitLab CI/CD Pipeline - How to output the console logs of a job (Maven) as an artifact to download
|
Variable expansion within the parallel keyword is currently unsupported; there's an open issue regarding this feature request. To work around this limitation, you can use a method that involves envsubst, a template file, and a trigger using artifacts.
# .gitlab-ci.yml
stages:
- process
variables:
SUITES: '["suite1", "suite2", "suite3"]'
template:process:
stage: process
before_script:
- apt-get update
- apt-get install gettext-base
script:
- envsubst '${SUITES}' < template.yml > template.suites.yml
artifacts:
paths:
- template.suites.yml
template:trigger:
stage: process
trigger:
include:
- artifact: template.suites.yml
job: template:process
forward:
pipeline_variables: true
needs:
- job: template:process
The template.yml file should be present within your repository:
# template.yml
stages:
- test
test:
stage: test
parallel:
matrix:
- SUITE: ${SUITES}
script:
- pytest -m ${SUITE} --json-report --json-report-file=${SUITE}_report.json
artifacts:
when: always
paths:
- "${SUITE}_report.json"
expire_in: "1 day"The final pipeline looks like thisHope this helps. 👨🏻💻
|
Trying to configure parallel testing. Is there some way to set up a variable as an array and use it in matrix? For example:
stages:
- test
variables:
SUITES: $SUITES
test:
stage: test
image: $CI_REGISTRY_IMAGE
parallel:
matrix:
- SUITE: [$SUITES]
script:
- pytest -m ${SUITE} --json-report --json-report-file=${SUITE}_report.json
artifacts:
when: always
paths:
- "${SUITE}_report.json"
expire_in: "1 day"The case is to run jobs in parallel with suite and artifacts for each job. Maybe, I'm looking the wrong place?
|
How to pass array variables to parallel matrix in GitLab CI pipeline?
|
By default, the runtime engine unrolls (or enumerates) all collection types when feeding output to a downstream cmdlet. However, ConvertFrom-Json in PowerShell versions up to v6.x returns its results in a way that prevents the runtime from enumerating them, so the next cmdlet in the pipeline receives an [object[]] array as a single pipeline item. You can solve this in a number of ways.
Nest the initial pipeline:
(az container list -o json | ConvertFrom-Json) | Select Name,ProvisioningState
Let ForEach-Object unroll the array on return:
az container list -o json | ConvertFrom-Json | ForEach { $_ } | Select Name,ProvisioningState
Use an intermediate variable (as you've already found):
$containers = az container list -o json | ConvertFrom-Json
$containers | Select Name,ProvisioningState
Upgrade to a newer version of PowerShell: the default behavior was changed in PowerShell [Core] 7.0.
if($PSVersionTable['PSVersion'].Major -ge 7){
az container list -o json |ConvertFrom-Json |Select Name,ProvisioningState
}
|
JSON conversion with an in-between result:
> $container=az container list -o json|convertfrom-json
> $container|select name,provisioningstate
Output:
name provisioningState
---- -----------------
master Succeeded
pasbackground1 Succeeded
sftp Succeeded
JSON conversion without an in-between result:
> az container list -o json|convertfrom-json|select name,provisioningstate
Output:
name provisioningstate
---- -----------------
I would expect the same result here as above. Why does saving a temporary result give different output than specifying the pipe commands in a row?
|
Powershell foreach-object different output - directly from convertfrom-json vs. through a temporary variable [duplicate]
|
It seems like you may be missing building your run config from within the sensor. The configuration you pass to a RunRequest should contain the resource configuration you want to run the job with, and will look exactly like the run configuration you'd configure from the launchpad. Something like:
### defining sensor that triggers the Job
@sensor(job=job_pipeline)
def job_pipeline_sensor(context):  # sensor function; the name is illustrative
    ### some calculation
    run_config = {
        "ops": {
            "op1": {
                "config": {"key": "value"},
            },
            "op2": {
                "config": {"key": "value"},
            },
        },
        "resources": {
            "some_API_module": {
                "config": {"key": "value"},
            },
            "db": {
                "config": {"key": "value"},
            },
        },
    }
    yield RunRequest(run_key="<unique_value>", run_config=run_config)
|
I am trying to build a sensor for the execution of a pipeline/graph. The sensor checks at different intervals and executes the job containing different ops. Now the job requires some resource_defs and config. In the official documentation I don't see how I can define resource_defs for the job. A small hint would be great. Question: where or how do I define resource_defs in a sensor? Do I even have to define them? It's not mentioned in the official documentation: https://docs.dagster.io/concepts/partitions-schedules-sensors/sensors
### defining Job
@job(
resource_defs = {"some_API_Module": API_module , "db_Module" : db} ,
config = {key : value }
)
def job_pipeline ():
op_1 () ## API is used as required resource
op_2 () ## db is used as required resource
### defining sensor that triggers the Job
@sensor ( Job = job_pipeline) :
### some calculation
yield RunRequest(run_key = "" config = {key : value} )
|
Define resource_defs in dagster job sensor
|
It's working! :) I followed the migration guide for MonoGame 3.7.1 to 3.8 and now it works again: https://docs.monogame.net/articles/migrate_37.html
|
I was working on a game on my laptop which I have now copied to my desktop (on which I installed MonoGame 3.7.1). I can run the build from my laptop on my desktop, but when building it on my desktop (from Visual Studio 2019) it gives the error underneath. I tried the following:
When I double-click the Content file it opens up the MGCB tool and I can see the content tree fine and modify it. When I click Build it simply does absolutely nothing (the output window stays blank, like I didn't click; same for Clean).
When I open the MGCB application and then open the Content file from within the application, I can build it just fine (all successful, also when cleaning and rebuilding; everything seems to work fine).
When I copied the command from the Visual Studio error into CMD, I got a message that FreeType6.dll couldn't be loaded (a lot of people got this error). The DLL is in the same folder as MGCB (so it's there). I installed VC++ redist 2012, 2013 and 2015 (as people suggested, which sometimes helped others), but that didn't help (I rebooted every time).
I installed all the fonts I used, both for the current (only) user on that PC and for all users, but that didn't help.
A lot of people seem to get stuck with the content build. I don't know where to look anymore. Does anyone have an idea on how to (try to) fix this in a systematic way? It's annoying that this error keeps popping up and stalling projects. Thanks for borrowing your brain!
|
Monogame Content not building from VS
|
Don't use .env as the name for the config file; use a name without ".", e.g. dotenv. Here is the issue: https://github.com/java-james/flutter_dotenv/issues/28
|
I am using a .env file in Flutter web.
Everything is OK when I run my project locally on Chrome and when I build the APK.
But when I run my project in an Azure pipeline to deploy my Flutter web app, I get these errors:
GET https://***.net/assets/fonts/MaterialIcons-Regular.otf 404 (Not Found)
GET https://***.net/assets/.env.develop 404 (Not Found)
I tried running my project without the .env file and everything was OK. Also, I removed the dot from the start of the file name, from ".env" to "dotenv".
I added the file to the assets directory and included it in the pubspec.yaml.
But that didn't solve my issue.
|
Failed to load asset at "assets/dotenv.develop" in flutter web
|
How do I delete the artifacts without deleting the entire pipeline?
Solution: Deleting the offending test runs also deleted the artifacts. :-)
Some users may also have to delete the retention policy on the run.
|
I have a security issue with the artifacts in my build pipeline. How do I delete the artifacts without deleting the entire pipeline?
Unlike the answers to the related posts below, I do not want to change the retention policy on the Azure DevOps project. The project contains repos I don't have control over.
Related posts:
The published artifacts cannot be deleted in vNext build definition in TFS 2017
How to delete Azure pipeline artifacts after it's finished?
|
How do I delete the artifacts published by my build pipeline?
|
Instead of remainder='drop' in the ColumnTransformer, write remainder='passthrough'. As you can see in the sklearn documentation, by default only the specified columns in transformers are transformed and combined in the output, and the non-specified columns are dropped (the default of 'drop'). By specifying remainder='passthrough', all remaining columns that were not specified in transformers will be automatically passed through.
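As a minimal sketch, reusing the transformer setup from the question (the column names are the question's toy columns, not a general recipe):
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder, KBinsDiscretizer

ct_features = ColumnTransformer(
    [('normalization', MinMaxScaler(), ['Income']),
     ('ohe', OneHotEncoder(sparse=False), ['City', 'Gender', 'Illness']),
     ('bins', KBinsDiscretizer(n_bins=5, encode='onehot-dense', strategy='uniform'), ['Age'])],
    remainder='passthrough')  # keep the columns not listed above instead of dropping them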
|
I am working on a toy dataset with ColumnTransformer and Pipeline, but I came across an error for which I couldn't find a solution on the internet.
toy = pd.read_csv('toy_dataset.csv')
toy_drop=toy.drop(['Number','Illness'],axis=1)
toy_target= toy.Illness
toy_target=toy_target.to_frame()
Data is imported:
rb=RobustScaler()
normalization=MinMaxScaler()
ohe=OneHotEncoder(sparse=False)
le=LabelEncoder()
oe=OrdinalEncoder()
bins = KBinsDiscretizer(n_bins=5, encode='onehot-dense', strategy='uniform')
ct_features=ColumnTransformer([('normalization',normalization,['Income']),
('ohe',ohe,['City','Gender','Illness']),
('bins',bins,['Age']),
],remainder='drop')
pip = Pipeline([
("ct",ct_features),
#("collabel",ct_label),
('lr',LinearRegression())])
x_train,x_test,y_train,y_test=train_test_split(toy_drop,toy_target, test_size=0.2,random_state=2021)
pip.fit(x_train,y_train)
I think everything looks clear, but this error occurred:
ValueError: A given column is not a column of the dataframe
|
ValueError: A given column is not a column of the dataframe in pipeline and columntransformer
|
If the gitlab-ci workflow starts by cloning your repository, no amount of git pull will change the fact youalreadyhave the full history, and, at the time of the workflow, this is "already up to date".In other words, a git pull would not be needed in yourgitlab-ci.ymlfile.
|
I am trying out a GitLab pipeline. I made some changes and pushed the code to the master branch,
but the pipeline shows "Already up to date" even though I have changes in the code. I tried to pull in all three stages but still get the same issue. gitlab-ci.yml:
before_script:
- echo "Before script"
building:
stage: build
script:
- git pull origin master
testing:
stage: test
script:
- git pull origin master
deploying:
stage: deploy
script:
- git pull origin master
|
Already up to date gitlab pipeline stackoverflow
|
I would recommend performing some preprocessing and ETL operations using Glue/EMR and then using mini-batches to send the data to Batch Transform. A blog regarding the same can be found here.
Thanks,
Raghu
|
I have gigabytes of data for which I want to make predictions usingAWS SageMaker Endpoint. I have two main issues:data comes as Excel files andAWS Batch Transformneeds it to be in JSON format to be able to process it. Reading Excel just to save it as JSON is redundant and it's a big IO slowdownEndpoint can only be invoked over HTTP which means a few MB payload limit - chunking into such small pieces slows things down as wellHow can I tackle these issues?Pipe Modecould be a potential solution but from I read it is used for training only. Is it possible to use Pipe Mode for inference to speed things up?
|
How to process large dataset with AWS Batch Transform
|
It is actually written in the category they are in:
$ gst-inspect-1.0 rtph264depay | grep Klass
Klass Codec/Depayloader/Network/RTP
So it is short for Payloader and Depayloader. Similar to a Muxer and Demuxer or Encoder and Decoder, but nothing here is really encoded/decoded or muxed/demuxed; existing data is just packaged in a specific format. Sometimes I have seen people call that Packetizer and Depacketizer as well.
|
I know some concepts in GStreamer like source, sink, pipeline, and pads. Those are programming concepts and also literal English words, and their meanings are related. But I don't know the meaning of pay or depay, as in rtph264depay. From some gst_parse_launch sample commands, it seems that rtph264depay is there to receive the data of a source. So how should I understand the word depay? Pay as in pay some money, so depay is like receiving money?
|
What's the meaning of 'pay' 'depay' in gstreamer world?
|
I think this answers your question: How to normalize validation set in GridSearchCV separately from training set? The correct way to do it is:
pipe = make_pipeline(StandardScaler(), LinearRegression())
grid = GridSearchCV(estimator = pipe, param_grid = params, cv = kfold)
grid.fit(x,y)
because for each fit, it will fit the scaler on the training folds and transform the test fold, which is the correct way to scale a dataset. If you do:
x = scaler_std.fit_transform(x)
your scaling method will leak information from the test set during training, which you should avoid.
|
Standardizing data without a pipeline:
kfold = KFold(3, shuffle = True, random_state = 3)
grid = GridSearchCV(estimator = LinearRegression(), param_grid = params, cv = kfold)
x = scaler_std.fit_transform(x)
y = scaler_std.fit_transform(np.array(y).reshape(-1, 1))
grid.fit(x,y)
Placing the scaler inside the pipeline:
pipe = make_pipeline(StandardScaler(), LinearRegression())
grid = GridSearchCV(estimator = pipe, param_grid = params, cv = kfold)
grid.fit(x,y)
These approaches gave me different scores. Which is more "right"?
|
How/Where to use StandardScaler with GridSearchCV
|
If the argument for applying migrations at deployment time is so that you don't have to remember to do it manually, then I would definitely go with option #1. By making a custom tool, you can also check if the developer forgot to make a migration, and refuse to deploy.With option 2, you could forget to add the SQL migration for a C# migration, and the code will still compile and the deployment will still succeed. It still comes down to the developer having to remember to do extra stuff when they make a migration.
|
I would like to upgrade my databases at deployment time. As I can see I have two choices:Write a tool that calls theMigrate()method as described inhttps://www.thereformedprogrammer.net/handling-entity-framework-core-database-migrations-in-production-part-2/#1b-calling-context-database-migrate-via-a-console-app-or-admin-commandUse thedotnet ef migrations script ... --idempotentcommand to produce one upgrade script as described inhttps://clearmeasure.com/run-ef-core-migrations-in-azure-devops/I like the second approach more, because I do not want to write a tool. However, I have a problem with monolithic scripts. Thedotnet ef migrations scripthas the potential to produce quite a big script. Unless I read the last migration from the database myself.Ideally, I would like to have a Sql script per migration committed to the version control, because producing one monolithic script is going to accumulate cost with time.What is the idiomatic way to produce Sql script per migration, instead of one monolithic script?EDIT 1My preferences may be wrong (because I have little experience with EF Core), so I am open for the first option too.The version control requirement is not a must either, if we can generate the Sql code for just the missing migrations (in the second approach).We use Azure DevOps Server 2019 on prem for the pipeline. Soon to be upgraded to 2020.
|
How can dotnet ef migrations script produce a script per migration?
|
Why not just create two different datasets and use one classifier on each? Simple code like the below should be sufficient:
import pandas as pd
df = pd.read_csv('csv_name.csv')
#drop each column in the resp dataset
for_clf_1 = df.drop(['described'],axis = 1)
for_clf_2 = df.drop(['not described'], axis =1)
|
I have a dataset that I need to run through a classification Pipeline. The dataset has 2 types of rows:
described: description column POPULATED
non-described: description column EMPTY
I want to apply one classifier targeting ONLY the described data, and another one for the non-described data. I am currently doing so by separating the dataset, and then preprocessing and feeding each part to its corresponding classifier separately. What I want to accomplish is fitting this process into a Sklearn pipeline. It should be something like this:
classifierPipe = Pipeline([('preproc_described', DescPreprocessor),
('preproc_non_described', NonDescPreprocessor),
('clf_described', CLF1),
('clf_described', CLF2)
])
classifierPipe.fit(X_train,y_train)
I was reviewing StackingClassifier, but according to the documentation, the initial estimators are applied to all the rows in the dataset. How can I create a pipeline where each classifier targets a specific subset of the whole dataset?
|
Apply two different Sklearn classifiers to two different subsets of the same data
|
AFAIK, Jenkins starts a separate process for the bat() step and waits for it to finish. Also, communication between Jenkins and the bat process is not possible: you trigger the script and you read the returned value and, if needed, stdout. Additionally, I do not know if what you want is possible, because I do not know Katalon at all. What you want requires Katalon waiting for a result from the Python script and then, when this result reaches Katalon, resuming its execution. I recommend you first try the process without Jenkins: create a Windows script that does exactly what you want. If you are able to do that, you can then call that new script from Jenkins, giving input or reading outputs as needed. Even, as you suggested, using files for that.
|
We have two scripts (one for Katalon and one for Python) that we want to launch from Jenkins.
First we want to launch Katalon and, at a certain point in the script, tell Jenkins to launch the Python script. Then, once the Python script has finished, Jenkins should tell Katalon that it can continue. Current Jenkins pipeline code:
"pipeline {
agent any
stages {
stage('Unit Test') {
steps {
echo 'Hello Example'
bat """./katalon -noSplash -runMode=console projectPath="/Users/mypc/project/proyect1/example.prj" -retry=0 -
testSuitePath="Test Suites/IOS/TestSuiteAccount" -executionProfile="default" -
deviceId="example" -browserType="iOS" """
sleep 5
}
}
stage('Unit Test2') {
steps {
echo 'Start second test'
bat """python C:\\Users\\myPC\\Documents\\project\\project-katalon-code\\try_python.py"""
sleep 5
}
}
}
}"In pseudocode it would be the following:Katalon script:my_job()
call_jenkins_to_start_python()
if jenkins.python_flag == True
my_job_continue()
Jenkins pipeline script:
Katalon.start()
if katalon_sent_signal_to_start_python == True
start_python_job()
if python_finished_job_signal == True
send_katalon_signal_to_continue()
Would reading/writing an external file be a good solution? I didn't find anything similar. Thank you!
|
How to read flags on jenkins jobs
|
While you did not provide your code, I will answer your question based on your explanation. First, regarding DoFn.start_bundle(): this function is called for every bundle, and it is up to Dataflow to decide the size of these bundles, based on the metrics gathered during execution. Second, DoFn.setup() is called once per worker; it will only be called again if the worker is restarted. As a comparison, DoFn.process() is called once per element. Since you need to refresh your query twice per week, this would be a perfect use for a side input using the "Slowly-changing lookup cache" pattern. You can use this approach when you have a lookup table which changes from time to time, so you need to update the result of the lookup. However, instead of using a single query in batch mode, you can use streaming mode: it allows you to update the result of the lookup (in your case the query's result) based on a GlobalWindow. Afterwards, having this side input, you can use it within your main stream PCollection. Note: I must point out that, as a limitation, side inputs won't work properly with huge amounts of data (many GBs or TBs). Furthermore, this explanation is very informative.
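If you would rather keep the cache inside the DoFn instead of the side-input pattern above, a simpler (and cruder) alternative is to re-run the query from process() once the cached result is older than some maximum age. This is only a sketch under assumptions: the _run_query helper and the refresh interval are placeholders, not Beam/Dataflow API.
import time
import apache_beam as beam

class EnrichFn(beam.DoFn):
    MAX_CACHE_AGE_SECS = 14 * 24 * 3600  # refresh roughly every two weeks

    def setup(self):
        # Called once per DoFn instance (per worker); build the initial lookup cache here.
        self._cache = self._run_query()
        self._cached_at = time.time()

    def process(self, element):
        # Opportunistically refresh the cache when it has grown too old.
        if time.time() - self._cached_at > self.MAX_CACHE_AGE_SECS:
            self._cache = self._run_query()
            self._cached_at = time.time()
        yield element, self._cache.get(element)

    def _run_query(self):
        # Placeholder for the BigQuery lookup; should return a dict-like object.
        return {}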
|
I have a streaming pipeline where I need to query BigQuery as a reference for my pipeline transform. Since the BigQuery tables only change every 2 weeks, I put the query cache in setup() instead of start_bundle(). From observing the logs, I saw that start_bundle() refreshes its value in the DoFn life cycle roughly every 50-100 elements processed, but setup() is never refreshed. Is there any way to deal with this problem?
|
How long beam setup() refresh in python Dataflow DoFn life cycle?
|
Try to replaceCMD ["npm","start"]by this :RUN setx path "%path%;C:\node-v12.10.0-win-x64"
|
I'm trying to build a .NET Core 3.1 Angular app in Docker using an Azure pipeline. This is my Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
RUN echo "Downloading NodeJS ..." && \
curl "https://nodejs.org/dist/v12.10.0/node-v12.10.0-win-x64.zip" --output nodejs.zip && \
echo "Expanding NodeJS ..." && \
tar -xvf nodejs.zip -C "C:\\"
RUN CD "C:\node-v12.10.0-win-x64" && \
ECHO "npm install ..." && \
npm install
CMD ["npm","start"]
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY ["WebUI/WebUI.csproj", "WebUI/"]
RUN dotnet restore "WebUI/WebUI.csproj"
COPY . .
WORKDIR "/src/WebUI"
RUN dotnet build "WebUI.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "WebUI.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "WebUI.dll"]I get error:C:\src\WebUI\WebUI.csproj(134,5): error MSB3073: The command "npm install" exited with code 9009.How can I install Node.js correctly?
|
.Net core Angular app running in a Docker Windows container
|
I still don't have the answer, but I was able to resolve this issue by moving all the curl commands into a bash script file and executing that.
|
I am running an Azure DevOps pipeline to install and configure ELK. There is a shell script which executes all the commands to install ELK and configure it using curl commands. But the last 4-5 commands at the end of the file are not executed, and I can see truncated scripts in the log:
2020-06-12T08:56:10.2856017Z > echo "Registering the azure reposi
2020-06-12T08:56:10.2856671Z > curl -X PUT -uadmin:"***" "https://10.XXX.X
2020-06-12T08:56:10.2856981Z > echo "Setting up snapshot backup poli
2020-06-12T08:56:10.2857523Z > curl -X PUT -uadmin:"***" https:
2020-06-12T08:56:10.2857807Z > echo "Finished configuring Kibana"I have also swaped these scripts with other scripts above it which were executed successfully but again the scripts which were successfull now starts getting truncated. I am not sure what I am doing wrong. Kindly help. Thanks in advance.
|
In Azure pipeline linux commands getting truncated at the end of the script file
|
The Copy activity can only be used for data transmission, not for any aggregation feature, so @activity('copyActivity1').output won't help. Since you said you can't use a Lookup activity, I'm afraid your requirement is not achievable so far. If you prefer not to use additional activities, I suggest using a Data Flow activity instead, which is more flexible. There is a built-in aggregation feature in the Data Flow activity.
|
I have a Copy Data activity with on-premise SQL Server as source and ADLS Gen2 as sink. There is a control table to pick up tableName, watermarkDateColumn and the watermarkDatetime to pull incremental data from the source database. After data is pulled/loaded into the sink, I want to get the max of the watermarkDateColumn in my dataset. Can it be obtained from @activity('copyActivity1').output? I'm not allowed to use an extra Lookup activity to query the source table for getting the max(watermarkDateColumn) in the pipeline.
|
How to get max of a given column from ADF Copy Data activity
|
First of all, Ansible playbooks can be very resource intensive, especially when running against many hosts and/or using process forks, caching, etc. It's common to see the Ansible process allocating a lot of system memory. This can lead to an out-of-memory situation; the operating system then picks and kills a running process in order to free the memory, which might affect your running Jenkins or Ansible. Check your system logs for these out-of-memory events. For Linux, use dmesg -T | grep "Out of memory" to filter out the relevant entries.
|
I have triggered a build for my app in Jenkins and it failed, returning the following error:
hudson.AbortException: Ansible playbook execution failed
Then I reverted my changes and triggered it again, yet the same error appeared and the build failed. I then triggered a build for another branch of the same project and it succeeded. I am new to Jenkins. Can anyone please help me understand the situation?
|
hudson.AbortException: Ansible playbook execution failed jenkins
|
You are missing an entry for referencedParameters: ''
properties([
parameters([
[$class: 'DynamicReferenceParameter',
choiceType: 'ET_FORMATTED_HTML',
omitValueField: true,
referencedParameters: '**ADD value here**',
description: 'Test',
name: 'TEST',
randomName: 'choice-parameter-46431548642',
script: [
$class: 'GroovyScript',
fallbackScript: [
classpath: [],
sandbox: true,
script:
'return[\'Could not get any info\']'
],
script: [
classpath: [],
sandbox: false,
script:
'''
return "<input name=\\"value\\" value=\\"Test\\" class=\\"setting-input\\" type=\\"text\\">"
'''
]
]
]
])
])
|
I'm trying to create an Active Choices HTML parameter with a default value in a Jenkins pipeline and I can't find the problem.
properties([
parameters([
[$class: 'DynamicReferenceParameter',
choiceType: 'ET_FORMATTED_HTML',
omitValueField: true,
referencedParameters: '',
description: 'Test',
name: 'TEST',
randomName: 'choice-parameter-46431548642',
script: [
$class: 'GroovyScript',
fallbackScript: [
classpath: [],
sandbox: true,
script:
'return[\'Could not get any info\']'
],
script: [
classpath: [],
sandbox: false,
script:
'''
return "<input name=\\"value\\" value=\\"Test\\" class=\\"setting-input\\" type=\\"text\\">"
'''
]
]
]
])
])
It's working fine when I use:
return "<input name=\\"value\\" value=\\"\\" class=\\"setting-input\\" type=\\"text\\">"
without a value, but then I get an empty field. Any ideas?
|
Active choices reactive reference parameter as formatted html in jenkins pipeline with default value
|
The error you describe will only happen if you retrain your OneHotEncoder, which you should not do in an automated process. You should train your OneHotEncoder just like any other ML model, once on your training dataset, and then apply this trained encoder to whatever new data you want to feed through your automated pipeline. Example:
import pandas as pd
from sklearn.preprocessing import OneHotEncoder
df1 = pd.DataFrame({"cat_col":["a","b","c"]})
df2 = pd.DataFrame({"cat_col":["a","b"]})
ohe = OneHotEncoder(handle_unknown="ignore")
print(ohe.fit_transform(df1).toarray())
print(ohe.transform(df2).toarray())
This will return
[[1. 0. 0.]
[0. 1. 0.]
[0. 0. 1.]]
[[1. 0. 0.]
[0. 1. 0.]]
|
I'm trying to automate my training/prediction process, but I have a problem with one-hot encoding. Let's say I have a column that looks like this:
column
a
b
c
If I one-hot encode it I'll get 3 columns, one for each letter. But if later, after I download some new data, this same column only contains a and b, the column named "column_c" won't be created, and so I cannot predict using the model because of the shape: I'll have 2 columns instead of 3. How can I fix that?
Thank you
|
Pipeline production oneHotEncoding
|
As the name suggests, a Canary DAG is not supposed to do any real work; it is just a dummy DAG that runs to verify the uptime of the Airflow scheduler. With the above points in mind, I think DummyOperator (does nothing, but the mere execution of the task is enough to give us a clue), PythonOperator (print something) and BashOperator (echo something) are good contenders for the tasks that make up a Canary DAG. The Robinhood folks say they write "..tasks which perform very simple actions such as establishing database connections.."
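A minimal canary DAG along those lines might look like this (a sketch only; the Airflow 2.x import paths and the 10-minute schedule are assumptions to adapt to your setup):
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.dummy import DummyOperator

with DAG(
    dag_id="canary_dag",
    start_date=datetime(2021, 1, 1),
    schedule_interval="*/10 * * * *",  # run often so scheduling gaps are easy to spot
    catchup=False,
    default_args={"retries": 0, "execution_timeout": timedelta(minutes=2)},
) as dag:
    start = DummyOperator(task_id="start")
    heartbeat = BashOperator(task_id="heartbeat", bash_command="echo canary alive")
    start >> heartbeat
Monitoring then reduces to alerting when this DAG has not produced a successful run within its expected interval.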
|
My team has multiple Airflow jobs. The jobs which run frequently are getting scheduled and run properly, but the jobs which run rarely are often not scheduled and are skipped. I would mainly like to create a health-checking DAG so I am alerted when my jobs are not scheduled and not run.
|
How to implement a Canary DAG in Airflow for health checks of other jobs?
|
You can use the generic-webhook-trigger plugin. For instance, GitHub webhooks in Jenkins are used to trigger the build whenever a developer commits something to the branch; in each webhook we have the following info:
git repository name
branch which was changed
commit id
commit message
commit author
etc ...
To avoid the loop:
save the Jenkins commit ID to a file and add the file to .gitignore
read the remote commit ID from the push using the generic webhook trigger
compare the local file commit ID with the remote commit ID
If they are the same, the commit came from Jenkins; if not, the commit is not from Jenkins. Here is a snippet that may help (change it accordingly); it will not create a loop:
#!/bin/bash
webhook_commit_id=$commit
commit_by_jenkins=commit_by_jenkins.txt
if [ ! -f $commit_by_jenkins ]
then
    echo "creating local file named commit_by_jenkins.txt, please add this file to .gitignore"
    touch commit_by_jenkins.txt
fi
jenkins_commit=`cat commit_by_jenkins.txt`
if [ "${webhook_commit_id}" == "${jenkins_commit}" ]; then
    echo "commit by jenkins server, ignoring commit"
else
    echo "committing code from jenkins server"
    git add -A && git commit -m "commit by jenkins server" && git rev-parse HEAD > commit_by_jenkins.txt
fi
|
I'm attempting to create a Jenkins pipeline which does the following steps at a high level:
1. Send build notifications
2. Run lint tests
3. Run CI tests
4. Update version information in a couple of files
5. Use git to commit the updates to the files, create tags, and push changes to origin
6. Upload files in the project to another system
7. Send notifications
I want this pipeline to execute when a commit happens to specific branches. I have this working now, but the issue is that when the job commits the new changes during the build (in step 5 above), it launches a new build and essentially enters an infinite loop.
|
Prevent Jenkins job from launching additional jobs with git commit
|
You can try using FeatureUnion:
def blank(df):
return df
subpipe = FeatureUnion(
[('prep_data', Function_transformer(blank)),
('feats_A', Function_transformer_A())])
features = Pipeline([
('subpipe', subpipe),
('feats_B', Function_transformer_B())
])
|
I have a sklearn pipeline like the following:
features = Pipeline([
('feats_A', Function_transformer_A()),
('feats_B', Function_transformer_B())
])
features.fit(X)
The input to feats_A is the fitted data X, and the input to feats_B is the output from feats_A. Instead, I want the input to feats_B to be the fitted data X and the output from feats_A together. Note that these two data matrices could have different dimensions, since Function_transformer_A applies aggregation to process the input data. Is it possible?
|
input to sklearn pipeline from previous step and from the fitted data
|
For this kind of transformation you're better off using mutate(), and the pipe framework lets you avoid writing Data$ for each field.
If we pretend that Data looks like this, you can try:
library(dplyr)
library(lubridate)
Data <- tibble(Ended=c('23-04-2019 00:00:00', '23-04-2019 01:00:00',
'24-04-2019 00:00:00', '24-04-2019 01:00:00'))
Data <- Data %>%
mutate(Ended=dmy_hms(Ended)- hours(4))
|
I'm trying to write a pipeline that parses a date vector and subtracts 4 hours from each value. Here's some sample data:structure(list(Created = c("24/04/2019 05:03:45", "24/04/2019 05:03:47",
"24/04/2019 05:03:56", "24/04/2019 05:04:00", "24/04/2019 11:51:57",
"24/04/2019 05:58:21", "23/04/2019 10:36:24", "24/04/2019 01:33:53",
"23/04/2019 18:44:50", "23/04/2019 18:25:19"), Ended = c("Â",
"Â", "Â", "Â", "24/04/2019 12:20:26", "24/04/2019 11:51:57",
"23/04/2019 10:51:21", "24/04/2019 05:03:56", "24/04/2019 01:33:53",
"23/04/2019 18:44:50")), row.names = c(NA, 10L), class = "data.frame")This works:Data$Ended <- dmy_hms(Data$Ended)
Data$Ended <- Data$Ended - hours(4)But this first step doesn't:Data$Ended %>% dmy_hms()I get this Warning message:
All formats failed to parse. No formats found.
|
How can I use lubridate parsing function with a R pipeline?
|
We have the same scenario, where playbooks share roles. We looked at:
tags
submodules (yikes)
subrepos
In the end, we went for neither: each of our playbooks includes, as its roles/ folder, a symlink to the parent folder of all our roles. That way, a playbook always references the latest version (currently checked out) of a role. Ansible keeps referencing a specific SHA1 of a playbook, but the consensus remains: always use the latest version of the roles, without managing any specific Git references (when it comes to the Git repos for each role).
|
I am working on a gitlab project with its own git repository with Ansible roles.The thing is that we also have some shared roles that are maintained by another team. These roles are in a separate project, and each role is like a subproject with its own repository. Right now in our pipeline we do a git clone of those roles from the runner, execute the code and remove the clone again.While this mechanism works well, the approach causes an issue. Say we deploy to development, test and acceptance, and then the shared roles get updated by the other team. By the time we deploy to production we will get inconsistencies as the shared roles that are cloned have changed.So I thought of introducing git tags. Every commit on the shared roles get tagged with a version number. We can then do a git clone, and a checkout of the version number.I am struggling however to get my head around how to implement this in the pipeline. Is there a way to do this without having to hard-code version numbers in the pipeline?
|
git clone and checkout specific tags
|
I haven't tried scheduling pipelines yet, but hopefully this may help. From the Kubeflow Pipelines UI, create an experiment for your pipeline. On the experiment page for your pipeline, there is an option to Create recurring run. Follow the instructions on that form to schedule runs for your pipeline.
|
I have pipelines on Kubeflow Pipelines that can be run from the Pipelines UI. My pipelines should be executed at specified times, like a crontab. How can I execute the pipelines periodically?
|
How to run Kubeflow pipelines periodically?
|
Did you yield TaskB in your run() method, or in requires()? I ran into this issue when yielding in the run() method. It turned out that the task it was yielding was broken for other reasons, but Luigi wasn't very clear about propagating the full exception stack for tasks yielded in run(). To help debug, try calling luigi.build([TaskB]) and see if that throws an exception. If you can provide the full exception stack, that would be helpful.
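For example, a quick local check along those lines (TaskB stands in for whichever task you are yielding, with whatever parameters it needs):
import luigi

if __name__ == "__main__":
    # Running the suspect task directly with the local scheduler usually
    # surfaces the real exception instead of the ambiguous registry error.
    luigi.build([TaskB()], local_scheduler=True)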
|
I spent a good five or six hours the other day trying to parallelize some work in Luigi, based on the method used here: http://rjbaxley.com/posts/2016/03/13/parallel_jobs_in_luigi.html
The problem I was having was that I kept getting a luigi.task_register.TaskClassAmbigiousException, which drove me crazy. Ultimately I threw luigi.auto_namespace(scope=__name__) at the top of my package and everything started working, but I don't know why. Roughly described, I had 3 tasks:
TaskA - requires nothing; provides a txt file with paths
TaskB - requires only input parameters p1 and p2; provides a .csv file
TaskC - requires the output from TaskA; yields one TaskB for each path pair from A's output; is completed when all yielded TaskBs are completed
If anyone can sketch how I should have done this correctly, instead of the hacked-together nonsense I have now, I'd be very grateful.
|
How to properly parallelize similar tasks in Luigi
|
The current best approach (short of writing custom ops in C++/CUDA) is probably to use https://www.tensorflow.org/api_docs/python/tf/contrib/eager/py_func. This allows you to write any TF eager code and use Python control flow statements. With this you should be able to do most of the things you can do with numpy. The added benefit is that you can use your GPU and the tensors you produce in tfe.py_func will be immediately usable in your regular TF code - no copies are needed.
|
tf.image, for example, has some elementary image processing methods already implemented, which I'd assumed are optimized. The question is: as I'm iterating through a large dataset of images, what is the recommended way of implementing a more complex function on every image (for example a patch-wise 2-D DCT), in batches of course, so that it fits as well as possible with the whole tf.data framework? Thanks in advance. P.S. Of course I could use the map method, but I'm asking beyond that: if I'm passing a function written in pure numpy to map, it wouldn't help as much.
|
How to incorporate custom functions into tf.data pipe-lining process for maximum efficiency
|
GridSearchCV has a special naming convention for nested objects. In your case ess__rfc__n_estimators stands for ess.rfc.n_estimators and, according to the definition of the pipeline, it points to the property n_estimators of
ModelTransformer(RandomForestClassifier(n_jobs=-1, random_state=1, n_estimators=100))
Obviously, ModelTransformer instances don't have such a property. The fix is easy: in order to access the underlying object of ModelTransformer, one needs to use the model field. So the grid parameters become:
parameters = {
'ess__rfc__model__n_estimators': (100, 200),
}
P.S. It's not the only problem with your code. In order to use multiple jobs in GridSearchCV, you need to make all objects you're using copyable. This is achieved by implementing the methods get_params and set_params; you can borrow them from the BaseEstimator mixin.
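A sketch of that last point - the ModelTransformer from the question rewritten to inherit from BaseEstimator so that get_params/set_params (and therefore cloning with n_jobs > 1) work:
from pandas import DataFrame
from sklearn.base import BaseEstimator, TransformerMixin

class ModelTransformer(BaseEstimator, TransformerMixin):
    def __init__(self, model=None):
        # Storing the constructor argument under the same name lets
        # BaseEstimator's get_params/set_params clone this object correctly.
        self.model = model

    def fit(self, *args, **kwargs):
        self.model.fit(*args, **kwargs)
        return self

    def transform(self, X, **transform_params):
        return DataFrame(self.model.predict(X))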
|
Below is my pipeline, and it seems that I can't pass parameters to my models using the ModelTransformer class, which I took from this link: http://zacstewart.com/2014/08/05/pipelines-of-featureunions-of-pipelines.html
The error message makes sense to me, but I don't know how to fix it. Any idea how? Thanks.
# define a pipeline
pipeline = Pipeline([
('vect', DictVectorizer(sparse=False)),
('scale', preprocessing.MinMaxScaler()),
('ess', FeatureUnion(n_jobs=-1,
transformer_list=[
('rfc', ModelTransformer(RandomForestClassifier(n_jobs=-1, random_state=1, n_estimators=100))),
('svc', ModelTransformer(SVC(random_state=1))),],
transformer_weights=None)),
('es', EnsembleClassifier1()),
])
# define the parameters for the pipeline
parameters = {
'ess__rfc__n_estimators': (100, 200),
}
# ModelTransformer class. It takes it from the link
(http://zacstewart.com/2014/08/05/pipelines-of-featureunions-of-pipelines.html)
class ModelTransformer(TransformerMixin):
def __init__(self, model):
self.model = model
def fit(self, *args, **kwargs):
self.model.fit(*args, **kwargs)
return self
def transform(self, X, **transform_params):
return DataFrame(self.model.predict(X))
grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, refit=True)
Error message:
ValueError: Invalid parameter n_estimators for estimator ModelTransformer.
|
passing arguments to featureUnion transformer_list [duplicate]
|
When you build an image every time with the new application, you have an easy way to deploy it later on to the customer or to your production server. When the Docker image is ready you can keep it in a registry. Additionally, you have full control over the fact that your container is running the current application.
When keeping the application in a mounted volume, you have to keep the following problems in mind:
life cycle of the application - what to do with the container when you have to update the application (gently stop, overwrite and run again)
how you deploy your application - you have to do it manually over SSH, whereas otherwise you could just run a simple docker run command and it runs the latest version from your registry
Mounted volumes are rather for the following cases:
you want externally exposed settings for the container - which is also not a good idea
you want external access to the data produced by the application, like logs, a database, etc.
To automate it fully, you can:
build an image for each application version and push it to the repository
use, for example, watchtower to automatically update the system on your production servers
|
We are discussing how we should deploy our application running in a Docker container. At the moment, we build our application image in the pipeline containing the application code, which means we have to build the Docker image every time the application updates.
Another approach we are considering is putting the application code in a volume on the server. We then pull the latest release with git on the server, so the image does not have to be rebuilt.
So our discussed options are:
Build the image containing the application code
Use a volume and store the application code on the server
What is best practice, and why?
|
How to deploy an application running in docker - best practice?
|
The steps within the FeatureUnion will be applied in parallel (and since you allow as many jobs as you have cores with n_jobs=-1, even actually in parallel). So yes, the CountVectorizer will be applied to the cleaned text. I think the graphics in this blog post make it quite clear. Regarding "is there a way to find out?", see my answer here for further questions.
|
I am working on a text classifier for which I want to do the following:
1. Create new features on the text (like the number of words, number of hash tags, etc.) with a custom transformer TextCounts
2. Clean the text with a custom transformer CleanText and apply CountVectorizer on it
3. Combine the features of steps 1 and 2 as input for my classifier
I managed to create a Pipeline for this, but I am not sure whether it runs as explained above.
features = FeatureUnion(n_jobs=-1,
[('textcounts', TextCounts())
, Pipeline([
('cleantext', CleanText())
, ('vect', vect)
])
])
pipeline = Pipeline([
('features', features)
, ('clf', clf)
])
In fact, I am not sure whether the CountVectorizer is being applied to the cleaned text or to the original text. Is there a way to figure that out? Thanks!
|
Scikit-learn Pipeline - Execution order of transformers
|
You can pass a float between 0.0 and 1.0 as the step argument, and that will remove that percentage of the features with each step. Check out the documentation here.
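For instance, tuning step as a fraction inside the grid from the question could look like this (the candidate values are placeholders, not recommendations):
params = {
    'polynomial_features__degree': [1, 2, 3],
    # a float step removes that fraction of the remaining features at each RFE iteration
    'feature_selection__step': [0.1, 0.2, 0.5],
}
As a side note, in newer scikit-learn releases (0.24+) RFE's n_features_to_select itself also accepts a float in (0, 1), interpreted as the fraction of features to keep, which matches the percentage behaviour asked about.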
|
I have a pipeline like so:
lin_reg_pipeline = Pipeline([
('polynomial_features', PolynomialFeatures()),
('normalize_polynomial_features', StandardScaler()),
('feature_selection', RFE(LinearRegression(), verbose=1)),
('lin_reg', LinearRegression())
])
Now, when fitting this pipeline in a grid search, I specify the following parameters to tune on:
params = {
'polynomial_features__degree': [1, 2, 3],
'feature_selection__n_features_to_select': st.randint(10, 100)
}
Is there a way I could set n_features_to_select as a percentage of the total number of features in the dataset? Because I don't know how many features PolynomialFeatures() will add.
Thanks in advance,
Kevin
|
Setting n_features_to_select RFE as percentage in pipeline
|
np.array(['Why', 'is', 'this', 'happening']).reshape(-1,1) is a 2D array of strings, while the docstring of the fit_transform method of the TfidfVectorizer class states that it expects:
----------
raw_documents : iterable
an iterable which yields either str, unicode or file objects
If you iterate over your 2D numpy array you get a sequence of 1D arrays of strings instead of strings directly:
>>> list(text_array)
[array(['Why'],
dtype='<U9'), array(['is'],
dtype='<U9'), array(['this'],
dtype='<U9'), array(['happening'],
dtype='<U9')]
So the fix is easy: just pass
text_documents = ['Why', 'is', 'this', 'happening']
as the raw input to the vectorizer.
Edit: remark: LogisticRegression is almost always a well-calibrated classifier by default. It is likely that CalibratedClassifierCV won't bring anything in this case.
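Putting it together with the same objects as in the question (a minimal check of the fix, not the full use case):
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
calibrated_pipeline = CalibratedClassifierCV(pipeline, cv=2)

# A plain list of documents (not a 2D array) is what the vectorizer expects.
text_documents = ['Why', 'is', 'this', 'happening']
outputs = np.array([0, 1, 0, 1])
calibrated_pipeline.fit(text_documents, outputs)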
|
First of all, thanks in advance. I don't really know if I should open an issue, so I wanted to check if someone had faced this before. I'm having the following problem when using a CalibratedClassifierCV for text classification. I have an estimator which is a pipeline created this way (simple example):
# Import libraries first
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.calibration import CalibratedClassifierCV
from sklearn.linear_model import LogisticRegression
# Now create the estimators: pipeline -> calibratedclassifier(pipeline)
pipeline = make_pipeline( TfidfVectorizer(), LogisticRegression() )
calibrated_pipeline = CalibratedClassifierCV( pipeline, cv=2 )
Now we can create a simple train set to check if the classifier works:
# Create text and labels arrays
text_array = np.array(['Why', 'is', 'this', 'happening'])
outputs = np.array([0,1,0,1])
When I try to fit the calibrated_pipeline object, I get this error:
ValueError: Found input variables with inconsistent numbers of samples: [1, 4]
If you want, I can copy the whole exception trace, but this should be easily reproducible. Thanks a lot in advance!
EDIT: I made a mistake when creating the arrays. Fixed now (thanks @ogrisel!). Also, calling pipeline.fit(text_array, outputs) works properly, but doing so with the calibrated classifier fails!
|
Bug with CalibratedClassifierCV when using a Pipeline with TF-IDF?
|
You have a few options:
You can find an sklearn wrapper for it.
You can write your own wrapper inheriting from BaseEstimator and meeting all the requirements for an sklearn estimator, e.g. all parameters have to be explicitly mentioned in the signature of __init__.
You can roll your own grid search, just looping through the parameters.
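A hand-rolled grid search for the last option might look like this (a sketch only, reusing df_original_ts from the question; selecting by AIC and skipping orders that fail to converge are my assumptions, not part of the original answer):
import numpy as np
from statsmodels.tsa.arima_model import ARIMA

orders = [(2, 1, 0), (0, 2, 1), (1, 0, 0)]
best_order, best_aic = None, float("inf")
for order in orders:
    try:
        fit = ARIMA(df_original_ts, order=order).fit(disp=0)
    except (ValueError, np.linalg.LinAlgError):
        continue  # some orders simply fail on a short series
    if fit.aic < best_aic:
        best_order, best_aic = order, fit.aic
print(best_order, best_aic)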
|
I tried to use an ARIMA model in the GridSearchCV function, but it returns:
"TypeError: Cannot clone object '' (type ): it does not seem to be a scikit-learn estimator as it does not implement a 'get_params' method."
import numpy as np
import pandas as pd
from sklearn.grid_search import GridSearchCV
from statsmodels.tsa.arima_model import ARIMA
df_original = pd.DataFrame({"date_col": ['2016-08-01', '2016-08-02', '2016-08-03', '2016-08-04', '2016-08-05',
'2016-08-06', '2016-08-07', '2016-08-08', '2016-08-09', '2016-08-10',
'2016-08-11'],
'sum_base_revenue_cip': [1, 2, 7, 5, 1, 2, 5, 10, 9, 0, 1]})
df_original["sum_base_revenue_cip"] = np.log(df_original["sum_base_revenue_cip"] + 1e-6)
df_original_ts = df_original.copy(deep=True)
df_original_ts['date_col'] = pd.to_datetime(df_original['date_col'])
df_original_ts = df_original_ts.set_index('date_col')
print df_original_ts
estimator = ARIMA(df_original_ts,order=(1,1,0))
params = {
'order': ((2, 1, 0), (0, 2, 1), (1, 0, 0))
}
grid_search = GridSearchCV(estimator,
params,
n_jobs=-1,
verbose=True)
grid_search.fit(df_original_ts)
|
how can I use estimators not in sklearn for model pipeline
|
The writeback stage is for writing the result back to the registers, and the MEM/WB buffer is there to hold data from the previous stage. By getting rid of the writeback stage, what you'd be doing is essentially extending the MEM stage. For example, in an instruction like LW R1, 8(R2), the contents of the memory location addressed by 8(R2) will be stored in the MEM/WB buffer. By copying the contents to the buffer, the MEM stage can accept another LW instruction, hence more ILP. @Craig Estey has answered the branch question correctly. However, even if you don't do the swapping @Craig mentioned, you can always use control signals and flush things in the IF and ID stages for the following instructions. I am not sure there is a precise answer as to when an inter-stage buffer is updated. The way I see it: at the beginning of a clock cycle the data in the inter-stage buffer is not relevant, and at the end of the clock cycle it is. Control signals are used to control what is happening in each stage of the pipeline, meaning they can be used to tell the IF stage not to fetch anything.
|
I've been learning about the MIPS datapath and had a couple of questions.
1. Why is there a writeback stage?
Thoughts: If it didn't add more latency or make the clock cycles longer, it seems like you could move the mux in the writeback stage into the MEM stage, remove the MEM/Writeback buffer and get rid of the writeback stage entirely. Why is this not the case?
2. Confusion about branch prediction and stalls.
Thoughts: If an add instruction follows a beq instruction into the pipeline (beq in the ID stage, add in the fetch stage) but the branch is taken, how does the add instruction then get converted to a no-op? (What control signals are set, and how?)
3. When are the inter-stage buffers updated?
Thoughts: I think they are updated at the end of the clock cycle, but I have been unable to verify this. Also, I am trying to understand what exactly happens during a stall. When a stall is needed, does the IF/ID inter-stage buffer get locked? If so, how is this done? Does the instruction then read from the buffer to determine what instruction should be in the ID stage?
Thanks for any help. Here's a picture of the pipeline:
MIPS Datapath Confusion
|
You need FOR /F to capture the output of a program.
set "cmd=echo list disk | diskpart | find /C "Disk""
setlocal EnableDelayedExpansion
FOR /F "usebackq delims=" %%A in (`!cmd!`) do (
set var=%%A
)
echo !var!
|
Trying to get the number of disks in a system using diskpart without using a temp file. This works on the command line:
echo list disk | diskpart | find /C "Disk"
but I can't figure out how to redirect the result into a batch var. Of course the number printed by the above pipeline is higher due to labels, but they are constant (divide the result by 3 for the actual number of disks). Any ideas?
I've tried:
set /A disks=<echo list disk...
set /A disks<echo list disk...
set /A disks= (echo list disk ...)
|
How can the results of a pipeline be captured in a batch var without temp file?
|
Surprisingly, the order of the sources in the pipeline does matter. After a slight modification of the pipeline, placing the source with the "larger" frame in the first place, I was able to get the result as expected:
gst-launch-1.0 -ev \
filesrc name="src1" location=$FILE1 \
! decodebin name="decodebin1" ! queue ! videoscale ! capsfilter caps="video/x-raw,framerate=15/1" ! videoconvert ! videomixer.sink_1 decodebin1. ! queue ! audioconvert name="ac1" \
filesrc name="src0" location=$FILE0 \
! decodebin name="decodebin0" ! queue ! videoscale ! capsfilter caps="video/x-raw,width=120,framerate=15/1" ! videoconvert ! videomixer.sink_0 decodebin0. ! queue ! audioconvert name="ac0"\
ac0. ! audiomixer.sink_0 \
ac1. ! audiomixer.sink_1 \
videomixer name="videomixer" ! autovideosink \
audiomixer name="audiomixer" ! autoaudiosink \
|
I need to compose a pipeline for a "picture-in-picture" effect to combine media from two files:
1) video content from the first file is shown in the full window
2) video from the second file is resized and shown in the top-left corner of the window
3) audio from both files is mixed
4) the content from both files should be played simultaneously
So far I got the following pipeline:
gst-launch-1.0 -e \
filesrc name="src0" location=$FILE0 \
! decodebin name="decodebin0" ! queue ! videoscale ! capsfilter caps="video/x-raw,width=120" ! videoconvert ! videomixer.sink_0 decodebin0. ! queue ! audioconvert ! audiomixer.sink_0 \
filesrc name="src1" location=$FILE1 \
! decodebin name="decodebin1" ! queue ! videoscale ! capsfilter caps="video/x-raw" ! videoconvert ! videomixer.sink_1 decodebin1. ! queue ! audioconvert ! audiomixer.sink_1 \
videomixer name="videomixer" ! autovideosink \
audiomixer name="audiomixer" ! autoaudiosinkHowever, it plays streams one by one, not in parallel. Does anyone know what should be changed here in order to play streams simultaneously ?Ps: attaching the diagram of this pipeline visualized:
|
Gstreamer picture-in-picture - two files playing in parallel
|
The key here is to keep well in mind that this "special logic" is only an optimization: it makes things faster, here bypassing something so to avoid a stall, but it must still insure that the result is unchanged. Otherwise it would be impossible or at least to difficult to program with this hardware.So, to answer your question, you will see either case (b) or (c) but never case (a).
|
I am currently implementing a MIPS R3051 in software as part of my university project. I notice the programmer's manual from IDT specifies that computational instructions can access the results of other computational instructions ahead of them in the pipeline at their RD stage, even though the instruction ahead has not yet committed its results to the relevant register in the WB stage. This is done via "special logic within the execution engine" to prevent a stall being necessary. My query is: does this also apply to non-computational instructions (like a jump-type instruction, for example)?
An example: if an ADD instruction calculates a value at its ALU stage destined for r1, with a JR [r1] instruction behind it in the pipeline at RD, will the JR instruction get:
(a) the old contents of r1, or
(b) will this "special logic" allow the new value of r1 to be forwarded to it? or
(c) will the pipeline stall until r1 has been committed properly at WB?
Apologies if this is asked elsewhere (I have not spotted it). Many thanks.
Phil
|
Query about MIPS R3051 pipeline behaviour (MIPS-I architecture)
|
Dynamic refresh the input file is NOT possible (at least withfilesrc).Besides, your sample usefreeze, which will prevent the image change.One possible method is usingmultifilesrcandvideorateinstead.multifilesrccan read many files (with a provided pattern similar to scanf/printf), andvideoratecan control the speed.For example, you create 100 images with format image0000.jpg, image0001.jpg, ..., image0100.jpg. Then play them continuously, with each image in 1 second:gst-launch multifilesrc location=~/image%04d.jpg start-index=0 stop-index=100 loop=true caps="image/jpeg,framerate=\(fraction\)1/1" ! jpegdec ! ffmpegcolorspace ! videorate ! v4l2sink device=/dev/video2Changing the number of image atstop-index=100, and change speed atcaps="image/jpeg,framerate=\(fraction\)1/1"For more information about these elements, refer to their documents at gstreamer.freedesktop.org/documentation/plugins.htmlEDIT: Look like you use GStreamer 0.10, not 1.xIn this case, please refer to old documentsmultifilesrcandvideorate
|
I'm trying to use a jpg-File as a virtual webcam for Skype (or similar). The image file is reloading every few seconds and the Pipeline should also transmit always the newest image.
I started creating a pipeline like this:
gst-launch filesrc location=~/image.jpg ! jpegdec ! ffmpegcolorspace ! freeze ! v4l2sink device=/dev/video2
but it only streams the first image and ignores the newer versions of the image file. I read something about concat and dynamically changing the pipeline but I couldn't get this working for me.
|
Creating a virtual webcam from jpeg using GStreamer
|
Short answer: no. Details below. In Weka there's KnowledgeFlow, but this is a GUI element (weka.gui.knowledgeflow). What you can use instead is the FilteredClassifier, which is a classifier that works on filtered data. If you want to use several filters before the classifier, you can use the MultiFilter instead of a single filter. If you want more flexibility, you can wrap FilteredClassifier: you can create a field List<Object> filters and then apply these filters before applying the classifier (buildClassifier, classifyInstance) depending on which types of filters they are, for example AttributeSelection or Filter.
|
I have used sklearn's Pipeline and FeatureUnion in some of my projects and find them extremely useful. I was wondering if there is any WEKA equivalent for them.
Thanks.
|
Weka equivalent of sklearn's pipelines and feature-unions
|
The only missing thing was the -F:
awk -F "," 'NR < 10' my_file.csv | awk -F "," '{ print $3 }'
|
I have a CSV file which has 4 columns. I want to:
1. print the first 10 items of each column
2. only print the items in the third column
My method is to pipe the first awk command into another, but I didn't get exactly what I wanted:
awk 'NR < 10' my_file.csv | awk '{ print $3 }'
|
awk combine 2 commands for csv file formatting
|
You can update the execution time inside the item_scraped signal handler:
Sent when an item has been scraped, after it has passed all the Item Pipeline stages (without being dropped).
This way, when the last item passes the pipeline stage, you'll catch it and can measure your total execution time. (This is not tested.)
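A sketch of how that could look as an extension (the class name and log message are made up for illustration; the signal API is standard Scrapy):
import time
from scrapy import signals

class TimingExtension:
    def __init__(self):
        self.start_time = None
        self.last_item_time = None

    @classmethod
    def from_crawler(cls, crawler):
        ext = cls()
        crawler.signals.connect(ext.engine_started, signal=signals.engine_started)
        crawler.signals.connect(ext.item_scraped, signal=signals.item_scraped)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        return ext

    def engine_started(self):
        self.start_time = time.time()

    def item_scraped(self, item, spider):
        # Fired after the item has passed all pipeline stages without being dropped.
        self.last_item_time = time.time()

    def spider_closed(self, spider, reason):
        if self.start_time and self.last_item_time:
            spider.logger.info("Time until last item left the pipelines: %.2fs",
                               self.last_item_time - self.start_time)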
|
I am a relative noob regarding Scrapy. I am trying to implement a feature that tracks how long it takes a Scrapy spider from the crawl command until all inserts/updates are done. I've written an extension that uses the engine_started and engine_stopped signals. This is working fine except for the fact that the engine_stopped signal is fired halfway through the inserts/updates in the pipelines. So my question is: is there any way to check that all pipelines are empty and Scrapy is completely finished with crawling and inserting/updating? Note: I am using twisted.enterprise.adbapi in my pipeline; my gut feeling is that that might be the reason why the engine_stopped signal is fired early.
|
Scrapy engine_stopped signal fired before all items are through pipelines?
|
Let's say you want to have a build-deploy-test scenario using the folowing:Unit-testsAcceptance testsCode coverage and static analysisDeployment to integration environmentFirst of all, You need to have one job for each case.
For example your create a job that runs JUnit tests, a job that runs selenium tests for AT, a job that runs Sonar code checks for static analysis and checkmarx for security checks and finaly a job that deploys your app to tomcat.Then, you need your jobs to run one after the other so what you do is that you set the 2nd job in the post build of the 1st, the 3rd in the post build of the 2nd and so on...Finally, select the initial job (the unit test job in this case) in your pipeline view to get the pipeline display.
|
My goal: set up a Jenkins-server capable of pulling down our github repos and run through the build-deploy-test scenarios.So I have set up a Jenkins-server. But I don't understand how I have to run through the build-deploy-test scenarios of my project.My project contains 1 repository which I putted into a job. I have installed the Build Pipeline Plugin. Will this be enough? It's difficult for me to understand the set-up. How do I have to start?
|
Jenkins CI configuration: Build/test/deploy scenarios
|
I think you are asking how you can transform multiple files when there are dependencies between the files, and possibly parallelise. The problem of resolving the dependencies is called a topological sort. Fortunately, the make utility will handle all of this for you, and you can use the -j flag to parallelise, which is easier than doing this yourself. By default it will only regenerate files if the input files change, but this is easy enough to get around by ensuring all outputs and intermediate files of each batch are removed / not present prior to invocation.
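Purely to illustrate the dependency-resolution idea (not the make-based solution itself), a toy topological sort over a sample/step graph like the one described could look like this in Python (the task names are invented for the example):
from graphlib import TopologicalSorter  # Python 3.9+

deps = {
    "sample1_B": {"sample1_A"},
    "sample2_B": {"sample2_A"},
    "group_C": {"sample1_B", "sample2_B"},  # step C needs every sample's B output
}
print(list(TopologicalSorter(deps).static_order()))
# e.g. ['sample1_A', 'sample2_A', 'sample1_B', 'sample2_B', 'group_C']
make builds exactly this kind of graph from the rules in a Makefile and, with -j, runs independent branches in parallel.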
|
I am trying to determine the best way to build a sort of pipeline system with many interdependent files that will be put through it, and I am wondering if anyone has specific recommendations regarding tools or approaches. We work mostly in Python and Linux.We get files of experimental data that are delivered to "inbox" directories on an HPC cluster, and these must be processed in several linear, consecutive steps. The issue is that sometimes there are multiple samples that must be processed at some stages of the pipeline as a group, so e.g. samples can independently be put through steps A and B, but all samples in the group must have completed this process to proceed through step C (which requires all of the samples together).It strikes me as a kind of functional problem, in that each step is kind of a modular piece and I will mostly only be checking for the existence of the output: if I have Sample 1 Step B output, I need Sample 2 Step B output so that I can then get Sample 1+2 C output.
I don't know a great deal about Puppet but I wonder if this kind of tool might be something I could use for this -- something that handles dependencies and deals with monitoring states? Any ideas?Thanks,Mario
|
How to deal with processing interdependent files in a pipeline
|
Finding nothing to replace is not an error; the operation just returns the original string. So check for that in an if statement, like:
ForEach-Object { $rep = $_ -replace $replace, $with; if ($_ -eq $rep) { ... } }
|
I need to add error checking on the ForEach-Object part. Currently, this code works to replace a value in a file. However, if it can't find the value, it doesn't seem to generate any error. I have tried a try/catch but it just isn't working. I've searched for hours and tried all kinds of stuff... any help?
I can do the command two different ways; I'm not sure which is better or easier to add error checking to:
(Get-Content $file) | ForEach-Object { $_ -replace $replace, $with } | Set-Content $file
-OR-
ForEach-Object { (Get-Content $file) -replace $replace, $with } | Set-Content $file
|
Powershell - error checking for find and replace (ForEach-Object)
|
The server script spawned a child process that never terminated because it started up a daemon. tee was waiting for the child process to terminate, resulting in this issue. More details here: Why does tee wait for all subshells to finish?
|
I have a script for following a log file to determine if a server has started, and it goes something like this:
echo "Starting server."
./start_server.sh
sleeptime=0
while [ ${sleeptime} -lt 60 ]
do
sleep 5
serverlog=$(tail -n 5 ./server.log)
echo ${serverlog} | grep -iq "Server startup"
if [ $? = 0 ]
then
echo "Server startup successful"
exit 0
fi
let sleeptime=sleeptime+5
done
echo "Warning: server startup status unknown."
exit 1When I run the script (./start_server.sh), the script exits fine. However, when I pipeline it to tee (./start_server.sh | tee -a serverstartup.log), the script doesn't end unless I force it (ctrl + C).Why doesn't the script exit when pipelined to tee?
|
Bash script fails to exit when pipelined to tee
|
This is because there are some problems between Turbolinks and Foundation. If you disable Turbolinks by taking the following out of your application.js manifest, it should be OK for you:
//= require turbolinks
If you do this you can also remove turbolinks from your Gemfile. I still haven't found a solution to get this done while using Turbolinks. Hope it helps.
|
I've recently deployed a Rails 4 site to Heroku and I've noticed a problem with my menu bar. I'm using Foundation so that, on small screens, the menu items on the navbar become a clickable drop-down menu. I'm also using a helper to get a gravatar, shamelessly copied from Michael Hartl's excellent Rails Tutorial book. I've noticed though, that on pages that have this gravatar the clickable menu doesn't initially work. If I refresh the page it does though. In my development environment it seems to work without refreshing the page as well, so I think that this is due to a pipeline problem. Has anyone else experienced something similar?
|
Issue with Heroku pipeline and javascript
|
RE:EDIT Maybe I can solve this problem another way, but I am standing in front of another problem. How to call my own function with parameter from pipeline?
EX: find -type d | MyFunction
The following all work:
$ cat ./blah.sh
#!/bin/bash
function blah {
while read i; do
echo $i
done
}
find ~/opt -type d | blah
blah <<< $(find ~/opt -type d)
blah < <(find ~/opt -type d)
$ ./blah.sh
/home/me/opt
/home/me/opt/bin
/home/me/opt /home/me/opt/bin
/home/me/opt
/home/me/opt/bin
So I'd imagine if find -type d | MyFunction doesn't work, then the function is probably not looking for input on stdin.
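As for the EDIT about passing the function its own parameter while still feeding it the pipeline: positional arguments and stdin are independent, so a sketch like this works (MyFunction and the "label" argument are just placeholders):
MyFunction() {
    # $1 is the function's own argument; the read loop still consumes
    # whatever the pipeline writes to stdin, one line at a time
    local label=$1
    while IFS= read -r dir; do
        printf '%s: %s\n' "$label" "$dir"
    done
}
find -type d | MyFunction "directory"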
|
I would like to save the output of the first command in a pipeline as a variable and still send it down the pipe.
For example: find -type d | grep -E '^\./y', where my variable should hold the output of find -type d.
Thanks for help.
EDIT
Maybe I can solve this problem another way, but I am standing in front of another problem. How to call my own function with a parameter from a pipeline?
EX: find -type d | MyFunction
|
How to make variable infront of pipeline in bash?
|
You can play with the FlowInterruptedException. For example, my not graceful, but working solution:
import org.jenkinsci.plugins.workflow.steps.FlowInterruptedException
node(){
stage("doing things"){
sendEmailflag = true
try{
echo "into try block"
sleep 10
}
catch(org.jenkinsci.plugins.workflow.steps.FlowInterruptedException e) {
sendEmailflag = false
echo "!!!caused error $e"
throw e
}
finally{
if(sendEmailflag == false)
{echo "do not send e-mail"}
else
{echo "send e-mail"}
}
}
}
This is based on:
Abort current build from pipeline in Jenkins
How to catch manual UI cancel of job in Jenkinsfile
Catching multiple errors in Jenkins workflow
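If the job can be written as a declarative pipeline instead, the same effect falls out of the post conditions, since a manual cancel lands in aborted rather than failure (the recipients and build steps below are placeholders):
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './build.sh'   // placeholder for the real build steps
            }
        }
    }
    post {
        failure {
            // only reached for real failures, not manual cancels
            mail to: 'team@example.com',
                 subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: "See ${env.BUILD_URL}"
        }
        aborted {
            echo 'Build was cancelled manually - not sending e-mail.'
        }
    }
}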
|
I have a Jenkins pipeline using Groovy that sends the status of builds via email when they are failed or unstable. But when I cancel a build manually in Jenkins, the email is still sent because the cancellation is treated as a failure. How can I make Jenkins not send emails in this situation?
|
How to stop the sending email from Jenkins pipeline when cancel the job in Jenkins manually
|
You need to define target output files using rule all.
SAMPLES = ['1', '2', '3', '4']
rule all:
input:
expand("sample{sample}.R{read_no}.fq.gz.out", sample=SAMPLES, read_no=['1', '2'])
rule fastp:
input:
reads1="sample{sample}.R1.fq.gz",
reads2="sample{sample}.R2.fq.gz"
output:
reads1out="sample{sample}.R1.fq.gz.out",
reads2out="sample{sample}.R2.fq.gz.out"
shell:
"fastp -i {input.reads1} -I {input.reads2} -o {output.reads1out} -O {output.reads2out}"
|
I'm starting to write a pipeline for my bioinformatics project and I'm using Snakemake as the workflow manager.
I went through the whole tutorial on the official site and some of the documentation. I want to run a single shell command, like this: fastp -i input-1 -I input-2 -o output-1 -O output-2
My code in the Snakefile:
SAMPLES = ['1', '2', '3', '4']
rule fastp:
input:
reads1=expand("sample{sample}.R1.fq.gz", sample=SAMPLES),
reads2=expand("sample{sample}.R2.fq.gz", sample=SAMPLES)
output:
reads1out=expand("sample{sample}.R1.fq.gz.out", sample=SAMPLES),
reads2out=expand("sample{sample}.R2.fq.gz.out", sample=SAMPLES)
shell:
"fastp -i {input.reads1} -I {input.reads2} -o {output.reads1out} -O {output.reads2out}"But the program run this single line of code:fastp -isample1.R1.fq.gz sample2.R1.fq.gz sample3.R1.fq.gz sample4.R1.fq.gz-Isample1.R2.fq.gz sample2.R2.fq.gz sample3.R2.fq.gz sample4.R2.fq.gz-osample1.R1.fq.gz.out sample2.R1.fq.gz.out sample3.R1.fq.gz.out sample4.R1.fq.gz.out-Osample1.R2.fq.gz.out sample2.R2.fq.gz.out sample3.R2.fq.gz.out sample4.R2.fq.gz.outHow can I write the program to do a different shell command for each sample? I tried afor i in SAMPLES:after therule fastp:but doesn't worked and I don't know what I can try now. Sorry if this topic is too basic in some way, but I'm a noob in Python.Thank you.
|
Snakemake using multi inputs
|
You can also write a bash script and run that script from the CWL tool. I mean:
basecommand: sh
inputfile: script.sh
The script could contain all of your commands, such as cat and wc. The script can also take other inputs for your commands, such as files or strings, and you can use them inside the script as $1, $2 and so on, where $1 refers to the first argument.
|
I'm new to CWL tools. I can use any bash command in basecommand, i.e.: basecommand cat or basecommand [wc, -w]. How should I modify it to make it do the same as cat | wc -w would do?
|
How to put two bash commands in CWL file?
|
It's not waiting for you to press Enter before it finishes. The command finishes immediately, but its output is printed after the next shell prompt. It looks like:
barmar@dev:~$ echo aaa > >(echo $(cat))
barmar@dev:~$ aaa
This is because the shell doesn't wait for the command in the process substitution to finish, it just waits for the echo command to complete. So it prints the next prompt as soon as the echo is done, and then echo $(cat) runs asynchronously and prints its output.
If you type another command at this point, it will work. It just looks weird because the output was printed after the prompt. You only need to press Enter if you want a new prompt on its own line.
|
echo aaa > >(echo $(cat))
In my opinion, it should print "aaa" and exit, but it stops unless I press the Enter key. Why?
Thx
|
Why echo not exit
|
You can use a lambda to give the incoming value a name:
seq { 1..100 }
|> Seq.sum
|> (fun x -> pown x 2)
Above, I named it x.
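Another option, if you prefer to keep the pipeline point-free, is a small argument-flipping helper (just a sketch; flip is not in FSharp.Core, you define it yourself):
// flip swaps the two arguments, so `flip pown 2` is `fun x -> pown x 2`
let flip f a b = f b a

let result =
    seq { 1..100 }
    |> Seq.sum
    |> flip pown 2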
|
I'm trying to do something like
seq { 1..100 }
|> Seq.sum
|> pown 2
It doesn't even compile because pown expects the 'T^' argument as its first argument and I'm giving it as the second one, since that's the default behavior of the pipeline. By googling I didn't find a way to make pown take the value carried by the pipeline as its first argument. Maybe it has some default name?
|
How to address f# pipeline parameter by name?
|
PostAsync has non-curried parameters, which cannot be passed in one by one; they have to be passed all at once. This is why you should always have your parameters curried.
But alas, you can't control the definition of PostAsync, because it's a .NET library method, so you have to wrap it one way or another. There are a few options:
Option 1: use a lambda expression:
|> fun body -> httpClient.PostAsync("/DoSomething", body)
Option 2: declare yourself a function with curried parameters
let postAsync client url body =
client.PostAsync(url, body)
...
|> postAsync httpClient "/DoSomething"This is usually my preferred option: I always wrap .NET APIs in F# form before using them. This is better, because the same wrapper can transform more than just parameters, but also other things, such as error handling or, in your case, async models:let postAsync client url body =
client.PostAsync(url, body)
|> Async.AwaitTask
Option 3: go super general and make yourself a function for transforming any function from non-curried to curried. In other functional languages such a function is usually called uncurry:
let uncurry f a b = f (a, b)
...
|> uncurry httpClient.PostAsync "/DoSomething"
One problem with this is that it only works for two parameters. If you have a non-curried function with three parameters, you'd have to create a separate uncurry3 function for it, and so on.
|
What's the correct way to pass a value to the second parameter of a function in a pipeline?
e.g.
async {
let! response =
someData
|> JsonConvert.SerializeObject
|> fun x -> new StringContent(x)
|> httpClient.PostAsync "/DoSomething"
|> Async.AwaitTask
}
In the above code PostAsync takes 2 parameters: the URL to post to and the content you want to post. I have tried the backward pipe too, and parentheses, but can't quite figure out how to do it.
Any ideas?
|
Pass value to second parameter in a pipeline
|
The rm command doesn't take filenames from standard input. If you want to pipe from sed to rm, you can use xargs. For example:
find /home/mba/Desktop/ -type d -name "logs" | sed 's/$/\/\*/' | xargs rm -rf
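A variant worth considering (a sketch, not tested against your exact layout): since rm itself does not expand the trailing /* glob and plain xargs splits on whitespace, you can hand each directory to a small shell instead, which also copes with spaces in path names:
find /home/mba/Desktop/ -type d -name "logs" -print0 |
  xargs -0 -I{} sh -c 'rm -rf "$1"/*' _ {}
# -print0/-0 keep paths with spaces intact; the inner sh expands "$1"/* itself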
|
I want to pipeline the commands below, but the last command, rm -rf, is not working, i.e. nothing gets deleted:
find /home/mba/Desktop/ -type d -name "logs" | sed 's/$/\/\*/' | rm -rf
No error is returned.
|
Why won't rm remove files passed in from find or sed?
|
Your translation looks good to me. The use of extension methods in C# (such as foo.Select(...)) is roughly equivalent to the use of the pipeline and an F# function from the List, Seq or Array modules, depending on which collection type you're using (e.g. foo |> Seq.map (...)).
It is perfectly fine to use LINQ extension methods from F# and mix them with F# constructs, but I would only do that when there is no corresponding F# function, so I would probably avoid ToArray() and ToList() in the sample and write:
open System
let Parse(csvData:string) =
// You can pass the result of `Split` (an array) directly to `Seq.map`
csvData.Split(Environment.NewLine.ToCharArray(), StringSplitOptions.None)
// If you do not want to get sequence of arrays (but a sequence of F# lists)
// you can convert the result using `Seq.toList`, but I think working with
// arrays will be actually easier when processing CSV data
|> Seq.map(fun x -> x.Split(',') |> Seq.toList)
|
I'm starting to learn F#. I'm finding it quite difficult to change my mindset from OO programming. I would like to know how an F# developer would write the following:
"Traditional" C#
public List<List<string>> Parse(string csvData){
var matrix = new List<List<string>>();
foreach(var line in csvData.Split(Environment.NewLine.ToArray(), StringSplitOptions.None)){
var currentLine = new List<string>();
foreach(var cell in line.Split(','){
currentLine.Add(cell);
}
matrix.Add(currentLine);
}
return matrix;
}"Functional" C#public List<List<string>> Parse(string csvData){
return csvData.Split(Environment.NewLine.ToArray(), StringSplitOptions.None).Select(x => x.Split(',').ToList()).ToList();
}
The question is: would the code below be considered right?
F#
let Parse(csvData:string) =
csvData.Split(Environment.NewLine.ToArray(), StringSplitOptions.None).ToList()
|> Seq.map(fun x -> x.Split(',').ToList())
|
F# pipelines and LINQ
|
Not everything needs to be optimized for peak performance. For mobile platforms energy efficiency is just as important. Out-of-order execution needs a lot of additional hardware, so it increases processor silicon size and decreases energy efficiency even though it improves single thread performance.Cortex-A53 is deliberately designed to be small and energy efficient, and can be used alongside a larger out-of-order core such as Cortex-A75 if higher performance is needed. Used together this is part of the Arm "big.LITTLE" heterogeneous SMP architecture. Mixing high efficiency "LITTLE" cores and high performance "big" cores and then allowing the operating system to load-balance across the two means that you get better energy efficiency for light workloads because you don't need to power up the high performance cores unless you are running an intensive workload.
|
I was reading about the ARM Cortex-A53 processor and I found out it uses a static in-order pipeline, in that instructions issue, execute, and commit in order. I can't understand why a modern processor like this would use in-order execution, since out-of-order execution is faster as it handles control and data hazards better.
|
Why modern processors still use in-order pipeline?
|
This is not related to the pipe operator at all:
julia> [1, 2, 3] * 2
3-element Vector{Int64}:
2
4
6
julia> [1, 2, 3] ^ 2
ERROR: MethodError: no method matching ^(::Vector{Int64}, ::Int64)
If you want to apply an operation on every element in a container you should use broadcasting (see also https://julialang.org/blog/2017/01/moredots/):
julia> [1, 2, 3] .* 2
3-element Vector{Int64}:
2
4
6
julia> [1, 2, 3] .^ 2
3-element Vector{Int64}:
1
4
9
The fact that [1, 2, 3] * 2 works is because vector times scalar is a mathematical operation, whereas vector raised to a scalar ([1, 2, 3] ^ 2) is not.
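Incidentally, the pipe itself can also be broadcast, which is probably the closest spelling to what was attempted (same g as in the question):
julia> g(x) = x^2;

julia> [1, 2, 3] .|> g   # broadcast the |> call over the vector
3-element Vector{Int64}:
 1
 4
 9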
|
The following code works fine:
f(x) = 2*x
[1, 2, 3] |> f
However, the following code fails:
g(x) = x^2
[1, 2, 3] |> g
Closest candidates are:
^(::Union{AbstractChar, AbstractString}, ::Integer) at strings/basic.jl:718
^(::Complex{var"#s79"} where var"#s79"<:AbstractFloat, ::Integer) at complex.jl:818
^(::Complex{var"#s79"} where var"#s79"<:Integer, ::Integer) at complex.jl:820
...
Stacktrace:
[1] macro expansion
@ ./none:0 [inlined]
[2] literal_pow
@ ./none:0 [inlined]
[3] g(x::Vector{Int64})
@ Main ./REPL[17]:1
[4] |>(x::Vector{Int64}, f::typeof(g))
@ Base ./operators.jl:858
[5] top-level scope
|
Julia pipe operator works with function involving multiplication but not with exponentiation
|
FeatureUnion can do the trick:
from sklearn.pipeline import FeatureUnion, Pipeline
prepare_select_pipeline = Pipeline([
('preparation', full_pipeline),
('feature_selection', TopFeatureSelector(feature_importances, k))
])
feats = FeatureUnion([('prepare_and_select', prepare_select_pipeline)])
prepare_select_and_predict_pipeline = Pipeline([('feats', feats),
('svm_reg', SVR(**rnd_search.best_params_))])
You can find more information about this in A Deep Dive Into Sklearn Pipelines.
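If all you need is the output of the first two steps, another option (assuming scikit-learn >= 0.21, where pipelines support slicing) is to slice the existing pipeline rather than build a second one:
# pipeline[:-1] is a new Pipeline containing every step except the final SVR
housing_prepared_top_k = prepare_select_and_predict_pipeline[:-1].fit_transform(housing)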
|
Below is part of the code that is relevant to the question. If there is a need for the full code, here is a fully reproducible notebook that downloads the data too: https://github.com/ageron/handson-ml2/blob/master/02_end_to_end_machine_learning_project.ipynb
I have a pipeline:
prepare_select_and_predict_pipeline = Pipeline([
('preparation', full_pipeline),
('feature_selection', TopFeatureSelector(feature_importances, k)),
('svm_reg', SVR(**rnd_search.best_params_))
])
Now, I want to execute only this part from the pipeline above:
('preparation', full_pipeline),
('feature_selection', TopFeatureSelector(feature_importances, k)),
I tried prepare_select_and_predict_pipeline.fit(housing, housing_labels), but it executes the SVM part too.
In the end I need to get the same result from the above pipeline as when I execute the code below:
preparation_and_feature_selection_pipeline = Pipeline([
('preparation', full_pipeline),
('feature_selection', TopFeatureSelector(feature_importances, k))
])
housing_prepared_top_k_features = preparation_and_feature_selection_pipeline.fit_transform(housing)
How can I do that?
|
How to execute only particular part of the scikit-learn pipeline?
|
The |>! operator you're describing is the standard "map" pattern that can apply to just about any "wrapper" type, not just async. If your return f r had been return! f r then you would have the standard "bind" pattern, which by convention should be written as the operator >>= if you're defining an operator for it.
And it is a good idea, but with one minor change. You've written it with the async value as the first parameter and the function as the second parameter, but the way you used it, fetchAsync() |>! work, requires the function to be the first parameter, e.g. let (|>!) f a = .... (If you look at the way Scott Wlaschin implements this in the first example I linked, he puts the function as the first parameter as well.) Also, I think most F# programmers would choose not to write this as an operator, but as a function called Async.map, so that its usage would look like this:
let result =
fetchAsync()
|> Async.map step1
|> Async.map step2
|> Async.map step3
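Async.map isn't defined in FSharp.Core, so you'd add it yourself -- essentially the operator from the question, renamed and with the function moved to the first position (a sketch):
module Async =
    // map lifts an ordinary function f into the async "wrapper"
    let map f computation = async {
        let! result = computation
        return f result
    }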
|
What if we define a |>! operator like so:
let (|>!) a f = async {
let! r = a
return f r
}Then instead of writinglet! r = fetchAsync()
work rwe could writefetchAsync() |>! workIs this a good idea or would it generate inefficient code?
|
Is this async pipelining operator ok
|
ForEach-Object and the foreach loop are different. If you use the latter everything should work as you expect.
foreach ($n in (1..10)){
$n
if(($n % 3) -eq 0)
{
"wow";
break;
}
}
"hello"The output would be1
2
3
wow
hello
UPD: You can read some helpful info about the differences here. The main thing to pay attention to is that ForEach-Object is integrated into the pipeline; it's basically a function with the process block inside. The foreach is a statement. Naturally, break can behave differently.
|
I have this script:
1..10|%{
$_
if(($_ % 3) -eq 0)
{
"wow"
break
}
}
"hello"I expect from 1 to 10, when meeting 3 it prints"wow", and finally "hello"
But actual output is:
1
2
3
kkkSo "break" seems to break not the %{} loop in pipeline, but broke the whole program? How can I break in the loop, while keep executing later statements?Thank you.
|
Powershell: "break" seems to end whole program, not just a loop?
|
You can add newlines after the pipe, and bash will continue to see it as a single pipeline:
foo | bar | baz | qux
can be written as
foo |
bar |
baz |
qux
Or, use line continuations, if the look appeals more:
foo \
| bar \
| baz \
qux
Newlines are acceptable after |, && and ||.
|
I have plenty of bash scripts with various files being piped into various scripts and it does my head in a bit.
I wondered if there was a way of visualising the pipeline in a bash script so I can easily see the flow.
|
Is there a visual bash pipeline editor
|
You should be using access rights for this; they would fit your needs.
You basically need to select the item you want to limit access to, then go to Security > Assign. Break inheritance from Everyone, then set Allow Read for the specific role you want to allow. That will hide the items in the tree from users who are not in the specified role.
|
Can't seem to find this posted online anywhere - excuse me if it is!
I am looking for an event/pipeline that one can override for the main content tree in the CMS. I need to hide/disable items in the tree according to user roles so they will not be able to select or view them.
Thanks!
Dan
|
Sitecore Overriding Content Tree
|