Response | Instruction | Prompt
---|---|---|
You have the wrong rules; they should be updated to the below, which only runs when there is a commit branch and otherwise never runs:
- if: $CI_COMMIT_BRANCH
- when: never
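Putting it together, the workflow section becomes:
workflow:
  rules:
    - if: $CI_COMMIT_BRANCH
    - when: never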
|
My .gitlab-ci.yml looks like this:
workflow:
rules:
- if: $CI_COMMIT_BRANCH
- if: $CI_COMMIT_TAG
when: never
stages:
- tmo
- test
- version
- build
but the pipeline still runs for tags. How do I get the desired behavior?
|
How to run pipeline only on branch and never on tag?
|
If you want to sync the tables daily, you may just look for timestamp_column > CURRENT_DATE. If you want to go by the max date instead, you can write something like this:
INSERT INTO table2 (x, y, z, timestamp_column)
SELECT x, y, z, current_timestamp() FROM table1
WHERE timestamp_column >
(SELECT IFNULL(MAX(timestamp_column), '0001-01-01') FROM table2);
On the other hand, I think Snowflake streams are a very good fit for this task: https://docs.snowflake.com/en/user-guide/streams-intro.html
You can create an "append-only" stream on table1 and use it as a source when synchronizing to table2.
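A minimal sketch of the stream approach (the stream name is just a placeholder):
-- an append-only stream only returns rows inserted since it was last consumed
CREATE OR REPLACE STREAM table1_stream ON TABLE table1 APPEND_ONLY = TRUE;

INSERT INTO table2 (x, y, z, timestamp_column)
SELECT x, y, z, current_timestamp() FROM table1_stream;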
|
This is more of a logic question, as I am having a hard time wrapping my head around it. Say I have table 1, which is truncated and repopulated every day, with a timestamp column added onto it; every day new records are added to the table. Table 1 is copied to table 2 initially, however on subsequent runs I only want to add the new records from table 1 into table 2. I know this will be a mixture of matching the columns and only importing the MAX dates, but I am confused as to the actual logic of the query. In short, I want to append only the latest rows from table 1 to table 2 based on the max date.
|
SQL for append rows based on max date
|
From your post, the JSON appears to contain an array as the result of runOutput. Can you try the following?
@activity('notename').output.runOutput[0].last_up
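Since the array element itself is a JSON string rather than an object, you may additionally need to parse it first, for example (untested):
@json(activity('notename').output.runOutput[0]).last_up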
|
How do I set a pipeline variable from a Databricks return value? The Databricks notebook exit JSON is:
"runOutput":
[
"{\"last_up\":\"2022-04-04 15:14:20\"}"
]
What I tried to get the date value:
@activity('notename').output.runOutput.last_up
What should I try?
|
Output Databricks json value to ADF
|
When you create a Secret file you are given a path to a temporary file with the secret content. Secret files are intended to be passed around as a whole file. For example, you can have your kubeconfig file as a Secret file and then pass it directly to kubectl, like kubectl --kubeconfig $SECRET_CONFIG. If you want to export each line in the secret file as a variable, it's doable, but when you start using them in your shell steps their values will be exposed in the logs. In order to use them, you can do something like the below.
sh """
source $SECRETS_FILE
echo "\$subscription"
echo "\$tenant"
"""
|
For my pipeline, I want to keep the subscription ID, tenant ID, client ID, and password all out of source control, and keep all of those in a single credential store in Jenkins. It seems that there is not a $class that matches this for withCredentials, but I just want to use credentials() in the environment anyway. Am I mistaken that the credentials('file') method reads as a single value, or is there a way to format that file such that Jenkins will parse it and make each secret available?
//Jenkinsfile (Declarative Pipeline)
pipeline {
agent { label 'AZcli' }
environment {
SECRETS_FILE = credentials('AZJenkinsSecretsFile')
}
Then what? Let's say this is the AZJenkinsSecretsFile.txt that I've uploaded:
subscription=xxxx-xxx-xx-xx-x
tenant=xxx-xxx-xxx-xx
client=xxx-xxxx-xxx
password=password
|
Get multiple secrets from one Credential Store in Jenkins
|
It seems that the problem is in the webhook service configuration, either in Jenkins or in ngrok. Did you try calling the webhook service directly from curl or Postman? Did you try calling the Jenkins webhook endpoint directly to make sure it is configured correctly? You may also try configuring Artifactory to call Jenkins directly by setting urlStrictPolicy to false in system.yaml.
You can read more about configuring Artifactory webhooks in the following article: https://jfrog.com/knowledge-base/artifactory-how-to-test-webhooks-in-artifactory-and-check-its-request-payload/
|
I have a self-hosted Artifactory repository which I want to use with a Jenkins pipeline; I deployed Jenkins with ngrok in order to have a fake domain. When I created the webhook inside Artifactory I used the following URL: https:///generic-webhook-trigger/invoke?token=123** I tried testing it inside Artifactory and keep getting the error alert "Sending a dummy Webhook failed", and of course the pipeline is not getting triggered. This is the output of the ngrok command; it looks like the webhook is getting triggered but ngrok is showing a 404 error. I've been stuck for days and I hope someone here can help me with this.
|
Artifactory webhook gives "Sending a dummy Webhook failed"
|
Why can't I choose an Azure Table Storage? Do I have to add an intermediate Parquet layer to store data between file and table?
Azure Table Storage is not supported as a data flow sink. Please see the supported sink types given in the documentation: https://learn.microsoft.com/en-us/azure/data-factory/data-flow-sink
You can use the Copy Data tool to move data from Azure Blob Storage to Table Storage instead. Kindly refer to the documentation on copying and transforming data in Azure Blob Storage by using Azure Data Factory or Azure Synapse Analytics: https://learn.microsoft.com/en-us/azure/data-factory/connector-azure-blob-storage?tabs=data-factory
|
I have a large 700 GB CSV file in an Azure Blob Container.
I am using Azure Synapse to transform column names, and some data and sink it in a table.
I am unable to sink it to the Table Storage in another Azure Data Lake Storage account. Why can't I choose an Azure Table Storage? Do I have to add an intermediate Parquet layer to store data between file and table? Please assist.
|
Azure Blob Container CSV to Azure Table Storage using Synapse
|
sklearn does in fact support notation like n_components=0.95: when n_components is a float between 0 and 1, PCA keeps the smallest number of components needed to explain that fraction of the variance.
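A minimal sketch based on the code in the question:
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

# keep as many components as needed to reach 95% explained variance
pipe = Pipeline([('scaler', StandardScaler()),
                 ('pca', PCA(n_components=0.95)),
                 ('clf', RandomForestClassifier())])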
|
I am using a standard scaler, PCA and a random forest to classify some data. I wanted to use the pipeline methodology, however I do not know how to let the pipeline know that I want n_components = 95% explained variance. How can I set up the code to calculate this number in the pipeline environment? Here is the code:
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
pipe = Pipeline([('scaler', StandardScaler()),
# ('pca', PCA(n_components=n_to_reach_95)),
('pca', PCA(n_components=15)),
('clf', RandomForestClassifier())])
# Declare a hyperparameter grid
parameter_space = {
'clf__n_estimators': [10,50,100],
'clf__criterion': ['gini', 'entropy'],
'clf__max_depth': np.linspace(10,50,11),
}
clf = GridSearchCV(pipe, parameter_space, cv = 5, scoring = "accuracy", verbose = True) # model
pipe.fit(X_train,y_train)
|
Establish a 95% explained variance while carrying out a pipeline process
|
You want the Software Optimization Guide for the core you are using, e.g. the Cortex-A72 Software Optimization Guide; page 6 has a diagram similar to the one in your question. Note that lower-level details are likely to be proprietary, so if you need more information than what's in the guide, you will probably have to reverse-engineer it by experimentation, or find someone else who has done so. For cores designed by vendors other than Arm (e.g. Apple), even the basic pipeline structure may be proprietary.
|
To optimize Arm assembly code, I found several documents such as Arm's reference manual, technical reference manual, etc. I expected to find a simple pipeline diagram like the one below. However, in these manuals I couldn't find any diagrams or description of the pipeline. When I google, I can find some information, but it is not enough, or not the latest version. Also, for reliability, I want to get correct information from an official document. Is there any document that has correct and up-to-date information on Arm's pipeline architecture?
|
what is best way to figure out the arm pipeline architecture?
|
It's the other way around: reduce(L, f) = fold(rest(L), first(L), f), so there's no special need for reduce -- it's just a short form for a common fold pattern. fold has lots of use cases of its own, though. The example you gave for string concatenation is one of them -- you can fold items into a special string accumulator much more efficiently than you can build strings by incremental concatenation (exactly how depends on the language, but it's true pretty much everywhere). Applying a list of incremental changes to a target object is another pretty common pattern: adding files to a folder, drawing shapes on a canvas, turning a list into a set, crossing off completed items in a to-do list, etc., are all examples. Also map(L, f) = fold(L, newList(), (M, v) -> add(M, f(v))), so map is just a common fold pattern too. Similarly, filter(L, f) = fold(L, newList(), (M, v) -> f(v) ? add(M, v) : M).
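A rough Python sketch of the relationship (the names are just illustrative):
def fold(xs, init, f):
    acc = init
    for x in xs:
        acc = f(acc, x)
    return acc

def reduce_(xs, f):
    # reduce is just fold seeded with the first element
    return fold(xs[1:], xs[0], f)

print(reduce_([1, 2, 3, 4], lambda a, b: a + b))          # 10
print(fold([1, 2, 3], [], lambda acc, x: acc + [x * 2]))  # [2, 4, 6]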
|
When accumulating a collection (just a collection, not necessarily a list) of values into a single value, there are two options. reduce() takes a List<T> and a function (T, T) -> T, and applies that function iteratively until the whole list is reduced into a single value. fold() takes a List<T>, an initial value V, and a function (V, T) -> V, and applies that function iteratively until the whole list is folded into a single value. I know that both of them have their own use cases. For example, reduce() can be used to find the maximum value in a list and fold() can be used to find the sum of all values in a list. But in that example, instead of using fold(), you can add(0) and then reduce(). Another use case of fold is to join all elements into a string, but this can also be done without fold, by map |> toString() followed by reduce(). Just out of curiosity: can every use case of fold() be avoided given the functions map(), filter(), reduce() and add()? (Also remove() if required.)
|
Is there a particular use case for fold() function
|
ffmpeg can run as a JACK or PulseAudio client, so the way forward is to treat PipeWire as if it were one of those two things: https://ffmpeg.org/ffmpeg-devices.html#jack
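For example, via PipeWire's PulseAudio compatibility layer something like this should work (the device and output file names are just placeholders):
ffmpeg -f pulse -i default -t 10 capture.wav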
|
I want to take audio from PipeWire and send it to the stdin of ffmpeg via its use of command-line pipes. Is there a way to get audio from PipeWire to something like a cat command?
|
Pipe pipewire to ffmpeg stdin
|
class A:
def process(self, input_df):
return input_df
class B:
def process(self, input_df):
a = A()
df = a.process(input_df)
df.to_csv('path.csv')
b = B()
b.process(some_df)
If I understand your question correctly, this is what you can do; if not, please update the question and use code formatting to make it human readable.
|
I want to import a dataframe from class A to class B. How can I do it?
class A:
def process(self, input):
return df
class B:
def process(self, input):
A = A(input_uri)
df = A.process(A, input)
df.to_csv('path')
But I keep getting an error about the 'generator' object [while running '[7]: To CSV'].
|
How to import dataframe from another class?
|
As far as I know, there is no way to change that behavior for all future parallel stages. However, one can change it for any given set of parallel stages, like this:
def parallel_stages = [:].asSynchronized()
parallel_stages['one'] = {
stage ('One') {
script {
println "One"
}
}
}
parallel_stages['two'] = {
stage ('Two') {
script {
println "Two"
}
}
}
// Here you set this for the given parallel stage
parallel_stages.failFast = true
parallel parallel_stages
|
How to use parallelsAlwaysFailFast() in Jenkins Scripted Pipeline?
I could not find any example for this. Edited: here is the code I use and the 'Blue Ocean' screenshot:
stage("Build") {
parallel([
failFast: true,
"Stage 1":{
stage("Stage 1") {
stage("a1") {
println("a1")
};
stage("a2") {
println("a2")
}
}
},
"Stage 2":{
stage("Stage 2") {
stage("b1") {
sh '''pwd'''
};
stage("b2") {
echo '''Here we can see the InterruptedException'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
error "Failing the stage"
}
}
}
}
])
}
Blue Ocean img
How can I make all the stages that are executed in parallel fail as well? Thanks.
|
How to use parallelsAlwaysFailFast() in scripted pipeline?
|
You cannot filter out collections by collection name from the pipeline itself. According to Mongo's manual, the pipeline is used to "Specify a pipeline to filter/modify the change events output". If you look at your change events, there is an ns property that gives the namespace of the change. You can use your pipeline to exclude matches on this ns property:
const pipeline = [
{
$match: {
$and: [
{
ns: {
$ne: {
db: "myDatabase",
coll: "notifications",
},
},
},
{
ns: {
$ne: {
db: "myDatabase",
coll: "rules",
},
},
},
],
},
},
];
const db = client.db("myDatabase");
const changeStream = db.watch(pipeline);
|
I am using MongoDB's change streams to watch for changes in my database. I'd like to watch every collection for changes except two. Something like this:
const pipeline = [{ $match: { name: { $ne: "excludedCollection1" } } },
{ $match: { name: { $ne: "excludedCollection2" } } }];
const db = client.db("myDatabase");
const changeStream = db.watch(pipeline);
However, this code does not exclude the two collections.
|
Aggregate Pipeline to exclude collections from db.watch() MongoDB
|
According to [HuggingFace]: Pipelines - class transformers.TokenClassificationPipeline (emphasis is mine): grouped_entities (bool, optional, defaults to False) - DEPRECATED, use aggregation_strategy instead. Whether or not to group the tokens corresponding to the same entity together in the predictions or not. So, your line of code could be:
ner = pipeline("ner", aggregation_strategy="simple", model="dbmdz/bert-large-cased-finetuned-conll03-english") # Named Entity Recognition (NER)
|
I am using the following code:
!pip install datasets transformers[sentencepiece]
from transformers import pipeline
ner = pipeline("ner", grouped_entities=True, model='dbmdz/bert-large-cased-finetuned-conll03-english') #Named Entity Recognition (NER)
ner("My name is <Name> and I work at <Office> in <location>.")
And I got the following warning. How do I address this warning?
UserWarning: grouped_entities is deprecated and will be removed in
version v5.0.0, defaulted to aggregation_strategy="AggregationStrategy.SIMPLE" instead.
|
HuggingFace Pipeline: UserWarning: `grouped_entities` is deprecated and will be removed in version v5.0.0. How to improve about this warning?
|
The "rendering pipeline" is the name for the different stages in the OpenGL renderer. Think of it like an assembly line in a factory: each piece of data goes through multiple steps in the rendering pipeline, and may be combined with other data. When all data is processed, rendering is complete. You can't even start rendering without a complete rendering pipeline.
"When does OpenGL write to the framebuffer?" is an interesting question. It turns out that normally, when you issue a draw command like glDrawElements(), the command is stored in a command buffer and processed at a later point in time. If you just call glDrawElements() by itself, you won't know when the rendering happens, but there are ways to find out. If you create a fence after rendering, you can find out when rendering is complete by querying the fence; see glClientWaitSync. If you call glFinish, it will wait until rendering is complete; see glFinish. Certain commands will also wait for rendering to finish as a side effect. For example, glReadPixels, when reading into client memory, will not return until previous rendering operations are complete.
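A rough sketch of the fence approach in C (error handling omitted; count is whatever you pass to your own draw call):
// issue the draw call, then insert a fence into the command stream
glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_INT, 0);
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

// later: wait up to ~16 ms for the GPU to reach the fence
GLenum status = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 16 * 1000 * 1000);
if (status == GL_ALREADY_SIGNALED || status == GL_CONDITION_SATISFIED) {
    // everything submitted before the fence has been rendered to the framebuffer
}
glDeleteSync(fence);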
|
When does OpenGL render my object to the framebuffer? Does the rendering happen after the rendering pipeline is complete or does it happen in the rendering pipeline (if so, which stage does it render in the pipeline). I recently asked a few people but I got different answers so I am unsure.
|
Does OpenGL rendering happen in the rendering pipeline or after the rendering pipeline is complete?
|
Try replacing your lookup with a lookup that uses a pipeline, like the below. If you want the array to be sorted, you can do it inside that pipeline. If this isn't what you need, please add some sample data and the expected output.
aggregate(
[{"$lookup":
{"from": "activities",
"localField": "activities",
"foreignField": "_id",
"pipeline": [{"$sort": {"createdAt": -1}}],
"as": "activities"}}])
|
I'm trying to sort an array (descending) that gets computed through a $lookup, by a field called createdAt which contains a Date object:
[
{
"$lookup": {
"from": "activities",
"localField": "activities",
"foreignField": "_id",
"as": "activities"
}
},
{
"$sort": {
"activities.createdAt": -1
}
},
{
"$project": {
"_id": false,
"updatedAt": false,
"activities._id": false,
"activities.game": false,
"activities.updatedAt": false,
"activities.__v": false,
"__v": false
}
}
]
The activities field is being "populated" correctly, but it's not sorting the results.
|
How to sort an array of `$lookup` result by `Date` field
|
This error is caused because the client IP shown in the error was not added to the SQL server's firewall in the Azure portal. You must add that IP address under the SQL server's networking/firewall settings in the Azure portal.
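If you prefer the CLI, the same rule can be added like this (the resource group, server and rule names are placeholders; use the client IP from the error message):
az sql server firewall-rule create \
  --resource-group my-rg \
  --server my-sql-server \
  --name allow-webapp-ip \
  --start-ip-address w.x.y.z \
  --end-ip-address w.x.y.z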
|
My app is running locally without any issue, but when I try to run it from Azure I get the following error:
An error occurred while starting the application.
SqlException: Cannot open server 'Server-name' requested by the login. Client with IP address 'w.x.y.z' is not allowed to access the server. To enable access, use the Windows Azure Management Portal or run sp_set_firewall_rule on the master database to create a firewall rule for this IP address or address range. It may take up to five minutes for this change to take effect.
Microsoft.Data.ProviderBase.DbConnectionPool.CheckPoolBlockingPeriod(Exception e)
SqlException: Cannot open server 'server-name' requested by the login. Client with IP address 'ip' is not allowed to access the server. To enable access, use the Windows Azure Management Portal or run sp_set_firewall_rule
I have added the IP in the firewall settings on the Azure portal, but the issue is still happening.
|
Error loading my web app after deploying .NET Core console app to Azure
|
You can access the steps of a pipeline in a number of ways, for example:
split_pipeline['splitter'].test_set
That said, I don't think this is a good approach. When you fill out the pipeline with more steps, at fit time everything will work how you want, but when predicting/transforming on other data you will still be calling your transform method, which will generate a new train-test split, forgetting the old one, and sending the new train set down the pipe for the remaining steps.
|
I am tasked with a supervised learning problem on a dataset and want to create a full Pipeline from complete beginning to end.
Starting with the train-test split, I wrote a custom class to bring sklearn's train_test_split into the sklearn pipeline. Its fit_transform returns the training set. Later I still want to access the test set, so I made it an instance variable in the custom transformer class like this: self.test_set = test_set
from sklearn.model_selection import train_test_split
class train_test_splitter([...])
[...
...]
def transform(self, X):
train_set, test_set = train_test_split(X, test_size=0.2)
self.test_set = test_set
return train_set
split_pipeline = Pipeline([
('splitter', train_test_splitter() ),
])
df_train = split_pipeline.fit_transform(df)
Now I want to get the test set like this:
df_test = splitter.test_set
It's not working. How do I get the variables of the "splitter" instance? Where does it get stored?
|
Get instance variable of custom transformer in sklearn pipeline
|
As the sequences array is nested inside an array of documents, you need to include an intermediary $map operation:
db.collection.aggregate([
{
"$project": {
"sentences": 1,
"documents": {
$map: {
"input": "$documents",
"as": "d",
"in": {
"sequences": {
$filter: {
"input": "$$d.sentences",
"as": "s",
"cond": {
$in: [
"$$s.uuid",
"$sentences"
],
}
}
}
}
}
}
}
}
])
|
I have a Mongo pipeline. Here is the output of one of the steps:
{
"_id": "6249b7a8338c31b803a1e56b",
"sentences": [
66,
61,
98,
44
],
"documents": [
{
"sentences": [
{
"uuid": 66,
"text": "cbElZuplrxPQicnBHvKQutEhZ",
"index": 58,
"cluster_id": "PyYvsHQfnypoowYsswiAJLlFP"
},
{
"uuid": 61,
"text": "cbElZuplrxPQicnBHvKQutEhZ",
"index": 58,
"cluster_id": "PyYvsHQfnypoowYsswiAJLlFP"
}
]
}
]
}
In one of the steps, I need to filter the array $documents.sentences based on whether $documents.sentences.uuid is in $sentences. My pipeline $project stage is defined like this:
{
"sentences": 1,
"documents.sentences": {
$filter: {
"input": "$documents.sentences",
"as": "s",
"cond": {
$in: ["$$s.uuid", "$sentences"],
}}
}
}
However, this results in a completely empty $documents. My question is: what is the best way to filter the documents.sentences array based on the condition that documents.sentences.uuid is in sentences? Thank you for your time.
|
Mongodb (aggregate) filter nested array based on array in the document
|
You are almost there. What you seek is the pipeline version of $lookup:
db.dept.aggregate([
{$lookup: {"from": "emp",
let: { did: "$_id" },
pipeline: [
{$match: {$expr: {$eq: [ "$_id", "$$did" ]} }},
{$project: {
_id:false,
email:true,
name:true
}}
],
as: "employees"
}}
]);
which will yield:
{
"_id" : 0,
"name" : "Chicago",
"number" : 10,
"employees" : [
{
"name" : "xyz",
"email" : "[email protected]"
}
]
}
|
Is it possible to add a pipeline to the arguments of another pipeline in MongoDB? An example of the current result that I am getting: MongoPlayground. Is it possible to project the emp table to only the name and email fields before adding it to the lookup pipeline as an argument?
The result that I want:
[
{
"_id": ObjectId("610bce417b0c4008346547bc"),
"employees": [
{
"email": "[email protected]",
"name": "xyz"
}
],
"name": "Chicago",
"number": 10
}
]
|
Add pipeline stage in the arguments of another pipeline - MongoDB
|
The specific error you are seeing is because the variable SAMPLES isn't set to anything before you use it in expand. Some other issues you may run into:
- The output file is missing the {sample} wildcard.
- The value of threads isn't passed into bwa or samtools.
- You should place your expand into the input directive of the first rule in your Snakefile, typically called all, to properly request the files from bwa_map.
- You aren't pairing your reads (R1 and R2) in bwa.
You should look around Stack Overflow or some GitHub projects for similar rules to give you inspiration on how to do this mapping.
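A rough sketch of the first two points, assuming the fastq/ layout shown in the question (adjust the pattern to your real paths):
# derive the wildcards from the files that already exist on disk
SAMPLES, REPS = glob_wildcards("fastq/Sample_{sample}/{rep}_R1_001.fastq.gz")

rule all:
    input:
        expand("mapped_reads/{rep}.bam", rep=REPS)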
|
I have written Snakemake code to run bwa_map. The fastq files have different folder names and different sample names (paired end). It shows the error 'SAMPLES' is not defined. Please help.
Error:
$ snakemake --snakefile rnaseq.smk mapped_reads/EZ-123-B_IGO_08138_J_2_S101_R2_001.bam -np
NameError in line 2 of /Users/singhh5/Desktop/tutorial/rnaseq.smk:
name 'SAMPLES' is not defined
File "/Users/singhh5/Desktop/tutorial/rnaseq.smk", line 2, in
#SAMPLE DIRECTORY
fastq
Sample_EZ-123-B_IGO_08138_J_2
EZ-123-B_IGO_08138_J_2_S101_R1_001.fastq.gz
EZ-123-B_IGO_08138_J_2_S101_R2_001.fastq.gz
Sample_EZ-123-B_IGO_08138_J_4
EZ-124-B_IGO_08138_J_4_S29_R1_001.fastq.gz
EZ-124-B_IGO_08138_J_4_S29_R2_001.fastq.gz
#My Code
expand("~/Desktop/{sample}/{rep}.fastq.gz", sample=SAMPLES)
rule bwa_map:
input:
"data/genome.fa",
"fastq/{sample}/{rep}.fastq"
conda:
"env.yaml"
output:
"mapped_reads/{rep}.bam"
threads: 8
shell:
"bwa mem {input} | samtools view -Sb -> {output}"
|
define SAMPLE for different dir name and sample name in snakemake code
|
I dug a bit into the Cloud SDK documentation and found that the feature was released a couple of days ago; please check the Google Cloud CLI release notes about cloud datapipelines. Still, it's in beta. You can check the gcloud beta datapipelines page for additional details. As this feature is still in beta it may have limited support; it's not recommended for production environments and is only for testing at the moment. For now, I think we will have to wait until it's fully released.
|
We recently created a Dataflow Job and Pipeline within the Google Cloud Console.
For record-keeping purposes, I want to record the gcloud equivalent commands for both the job and the pipeline.
I managed to determine the gcloud equivalent command for the Dataflow job, but I am unable to figure out how to create the gcloud equivalent for the Dataflow pipeline. Sample Dataflow job gcloud command:
gcloud dataflow jobs run sample_dataflow_job --gcs-location gs://dataflow-templates-us-east1/latest/Jdbc_to_BigQuery --region us-east1 --num-workers 2 --staging-location gs://dataflow_single_region_us/writingdirectory --subnetwork https://www.googleapis.com/compute/v1/projects/sample-project/regions/us-east1/subnetworks/project_network --disable-public-ips --parameters connectionURL=jdbc:mysql://psql.gcp.sample.net:6033/sample,driverClassName=com.mysql.cj.jdbc.Driver,query=select * from datab,outputTable=bigquerytable:sample.sample_DataFlow_1,driverJars=gs://dataflow_single_region_us/jdbc_driver,bigQueryLoadingTemporaryDirectory=gs://dataflow_single_region_us/bigqueryloading,username=johnsmith,password=Password1
Any ideas how I can get the gcloud command for the Dataflow pipeline?
|
Creating a gcloud command for Dataflow Pipeline job
|
Use the needs keyword: https://docs.gitlab.com/ee/ci/yaml/#needs
vm:build:
stage: build
script: echo "Building vm..."
test_1:
stage: test
script: echo "test_1"
test_2:
stage: test
script: echo "test_2"
test_3:
stage: test
script: echo "test_3"
cleanup:
stage: cleanup
needs: ["test_1", "test_2", "test_3"]
script: echo "clea"
|
I need to create a CI that erases the VMs that were created in the previous stage only if the previous stage succeeded.
If I use when: on_success, it works only if all previous stages passed. Stages:
- prep (2 jobs)
- build (5 jobs)
- test (5 jobs)
- cleanup
I want cleanup to run if all 5 test jobs passed, even if I have a failure in a job that is in the build stage.
|
Gitlab-ci How can I trigger a cleanup job only after previous stage succeeded disregarding all other stages status
|
I have reproduced this in my local environment. Please see the below steps.
- Using a Lookup activity, first get the list of all tables from the control table.
- Pass the Lookup output to a ForEach activity.
- Inside the ForEach activity, add another Lookup activity to get the variable list from the control table where the table name is the current item of the ForEach activity:
@concat('select table_variables from control_tb where table_name = ''',item().table_name,'''')
- Convert the Lookup2 activity output value to an array using a Set Variable activity:
@split(activity('Lookup2').output.firstRow.table_variables,',')
- Create another pipeline (pipeline2) with two parameters (table name (string) and variables (array)) and add a ForEach activity in pipeline2.
- Pass the array parameter to the ForEach activity in pipeline2 and use a Copy activity to copy data from source to sink.
- Connect an Execute Pipeline activity to pipeline 1 inside the ForEach activity.
|
I have a lookup config table that stores 1) the source table and 2) the list of variables to process, for example:
SQL lookup table:
tableA,variableX,variableY,variableZ <-- tableA has more than these 3 variables, i.e. it has other variables such as variableV and variableW, but they do not need to be processed
tableB,variableA,variableB <-- tableB has more than these 2 variables
Hence, I will need to dynamically connect to each table and process the specific variables in each table. The processing step is to convert the Julian date (in integer format) to a standard date (date format). Example SQL query:
select dateadd(dd, (variableX - ((variableX/1000) * 1000)) - 1, dateadd(yy, variableX/1000, 0)) FROM [dbo].[tableA]
The problem is that after setting up Lookup and ForEach in ADF, I am unsure how to loop through the variable array (or string, since the SQL DB does not allow me to store array results) and convert all these variables into the standard date format. The return result should be a processed dataset to be exported to a sink. What would be the best way to achieve this in ADF? Thank you!
|
Dynamic list of variables in process in Azure Data Factory
|
There is a good chance the root cause is a SageMaker training job that runs for more than 24 hours and times out because you didn't increase the default max_run:
max_run (int) – Timeout in seconds for training (default: 24 * 60 * 60). After this amount of time Amazon SageMaker terminates the job regardless of its current status.
Then when you resume the job, it probably picks up training from a checkpoint and is therefore able to eventually finish once the remaining work takes less than a day.
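For example, when defining the estimator for that training step you could raise the limit (the image, role and instance values here are just placeholders):
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",
    role="arn:aws:iam::123456789012:role/MySageMakerRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    max_run=3 * 24 * 60 * 60,  # allow up to 3 days instead of the 24h default
)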
|
I have a long running job that'll probably take over a day to run. It's just collecting initial data. However, after a day the job is paused automatically, I have to go in everyday to hit the resume button in the pipeline execution. How can I stop this from happening?
|
How can I stop sagemaker pipeline pausing my jobs after a day
|
You can use boolean (conditional) selection:
df[(df.Flow == True) | (df.City.isin(['Frankfurt', 'Amsterdam']))]
Output:
City Flow
0 Berlin True
3 Munich True
5 Frankfurt True
6 Frankfurt False
7 Amsterdam True
8 Amsterdam False
|
I have a sample dataframe df:
City Flow
Berlin True
Berlin False
Berlin False
Munich True
Munich False
Frankfurt True
Frankfurt False
Amsterdam True
Amsterdam False
I want to filter the dataframe so that the Flow column is True for all cities except Frankfurt and Amsterdam, such that df becomes:
City Flow
Berlin True
Munich True
Frankfurt True
Frankfurt False
Amsterdam True
Amsterdam False
|
Filter dataframe based on another column
|
Here I also use mutate with case_when. Since the NA in your dataset is the character "NA" (a literal string), we cannot use a function like is.na() to identify it. I would recommend changing it to a "real" NA (by removing the double quotes in your input). As I've pointed out in the comment, I'm not sure why the eighth entry should be "1" when the corresponding z is not "1" or "2".
library(dplyr)
df %>% mutate(v = case_when(x == "1" & y == "1" & z %in% c("1", "2") & w %in% paste0(0, seq(1:6)) ~ "1",
x == "NA" | y == "NA" | z == "NA" | w == "NA" ~ NA_character_,
T ~ "0"))
x y z w v
1 1 1 1 01 1
2 2 NA 2 02 <NA>
3 1 2 3 03 0
4 NA 1 4 04 <NA>
5 1 1 1 05 1
6 2 NA 2 06 <NA>
7 NA 2 3 07 <NA>
8 1 1 4 01 0
9 2 2 1 02 0
10 2 1 2 03 0
11 NA 1 3 04 <NA>
|
I'm trying to generate a new variable using multiple conditionals that evaluate against factor variables. So, let's say I have this data.frame of factor variables:
x<-c("1", "2", "1","NA", "1", "2", "NA", "1", "2", "2", "NA" )
y<-c("1","NA", "2", "1", "1", "NA", "2", "1", "2", "1", "1" )
z<-c("1", "2", "3", "4", "1", "2", "3", "4", "1", "2", "3")
w<- c("01", "02", "03", "04","05", "06", "07", "01", "02", "03", "04")
df<-data.frame(x,y,z,w)
df$x<-as.factor(df$x)
df$y<-as.factor(df$y)
df$z<-as.factor(df$z)
df$w<-as.factor(df$w)
str(df)
So I need to get a new v column on my dataframe which takes the value 1, 0 or NA with the following conditionals:
- Takes value 1 if: x = "1", y = "1", z = "1" or "2", w = "01" to "06".
- Takes value 0 if it doesn't meet at least one of the conditions.
- Takes value NA if any of x, y, z, or w is NA.
I had tried using a pipe %>% along with mutate and case_when but have been unable to make it work. So my desired result would be a new column v in df which would look like this:
[1] 1 NA 0 NA 1 NA NA 0 0 0 NA
|
Intricate variable generation with conditionals against multiple factor variables in R
|
Unfortunately the WriteToText transform can't be used for this because it currently only supports a fixed destination. In order to write files to dynamic destinations, you would instead need to use utilities from the fileio module, which supports dynamic destinations. This does mean switching to the experimental WriteToFiles transform.
|
I am creating a pipeline TEMPLATE which takes some input file and counts the words in it. All works fine up to this point, but the thing is that I need to pass another parameter (from the function where I call the template) that lets me pass the name of the file so I can create a path with it. I'll show you an example of what I want; although I know pipelines can't access runtime parameters during pipeline construction or outside a runtime context, this should give you an idea of what I need to do:
class tempatableTest(PipelineOptions):
@classmethod
def _add_argparse_args(cls,parser):
parser.add_value_provider_argument(
'--input',
type=str,
help='path to the input file'
)
parser.add_value_provider_argument(
'--fdinamic',
type=str,
help='folder name'
)
templatable_test = PipelineOptions().view_as(tempatableTest)
beam_options= PipelineOptions()
input = templatable_test.input
dinamicName = templatable_test.fdinamic.get()
with beam.Pipeline(options=beam_options) as p:
lines = p | beam.io.ReadFromText(input)
len = lines | beam.combiners.Count.Globally()
len | 'countTotalLen' >> beam.io.WriteToText(f'gs://bucket-test-out/processedFile/{dinamicName}/count.txt')
If I use templatable_test.fdinamic.get() I get the runtime error, but if I remove the .get() I get a super long name for the folder. I know this probably isn't the way to go, but it's just to illustrate what I need to do. Thank you for your help.
|
Dynamic paths with apache beam and Runtime parameters
|
No, Luigi won't start executing TaskB until TaskA has finished (i.e., until it has finished writing the target file). If you want to get a detailed response from luigi.build in case of error, you must pass an extra keyword argument, detailed_summary=True, to the build/run methods and then access the summary_text, this way:
luigi_run_result = luigi.build(..., detailed_summary=True)
print(luigi_run_result.summary_text)
For details on that, please read "Response of luigi.build()/luigi.run()" in the Luigi documentation. Also, you may be interested in this answer about how to access the error/exception: https://stackoverflow.com/a/33396642/3219121
|
Look at class ATask:
class ATask(luigi.Task):
config = luigi.Parameter()
def requires(self):
# Some Tasks maybe
def output(self):
return luigi.LocalTarget("A.txt")
def run(self):
with open("A.txt", "w") as f:
f.write("Complete")Now look at class BTaskclass BTask(luigi.Task):
config = luigi.Parameter()
def requires(self):
return ATask(config = self.config)
def output(self):
return luigi.LocalTarget("B.txt")
def run(self):
with open("B.txt", "w") as f:
f.write("Complete")Question is there is a chance that while TaskA running and start write "A.txt" taskB will start before taskA finishing writing?The second is that if I start execution likeluigi.build([BTask(config=some_config)], local_scheduler=True )And if this pipilene fail inside - Could I somehow to know outside about this like return value of luigi.build or smth else?
|
understanding of some Luigi issues
|
"mocha: command not found" means you have to install mocha in your gitlab runner environment.test:
stage: test
script:
- npm install --global mocha
- mocha test
|
I want to try CI/CD, so I am working on a simple project. I wanted to run the test file, but I get the error "mocha: command not found". There is no problem when I try it in my own terminal. How can I solve this? Thanks.
|
mocha: command not found in GitLab
|
By default, artifacts are downloaded to the target path in the file system while maintaining their hierarchy in the source repository (not including the repository name - hence p1 is missing in your example). To download an artifact while ignoring the hierarchy, set "flat": "true" in your file spec. For more advanced control of the resulting hierarchy, you may want to use Placeholders. See more information in the File Specs documentation.
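For your snippet that would look something like this:
spec: '''{ "files": [{"pattern": "p1/p2/p3/${BUILD_ID}/n_iter.txt", "target": "./", "flat": "true"}] }'''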
|
This snippet:
stage('get iter number') {
steps {
rtDownload ( //
serverId: 'MAIN-ARTIFACTORY',
spec: '''{ "files": [{"pattern": "p1/p2/p3/${BUILD_ID}/n_iter.txt", "target": "./n_iter.txt"}] }''',
)
}
}where BUILD_ID = 'a/b'
downloads the file to $WORKSPACE/p2/p3/a/b/n_iter.txt rather than the expected $WORKSPACE/n_iter.txt. Also, very strange - why is p1 not in the downloaded path?
|
How to properly use target of artifactory rtDownload in jenkins declarative pipeline
|
The issue seems to be with the path that you provided for the artifacts in the downstream pipeline:
paths:
- public/
You should change it to:
paths:
- docs/public/
When you include the child pipeline, the current working directory doesn't change; it is still proj/ and not docs/.
|
This is my folder tree:
proj/
├─ src/
├─ docs/
│ ├─ public/
│ │ ├─ assets/
│ │ ├─ index.html
│ ├─ .gitlab-ci.yml
├─ config/
├─ .gitlab-ci.yml
As you can see there are two .gitlab-ci.yml files. The first, in the root of the project, is the master pipeline that triggers the second one in the docs folder. I would like the first pipeline, in addition to deploying the application (only on a specific branch), to trigger the second pipeline and deploy the documentation on GitLab Pages. This is the code of docs/.gitlab-ci.yml:
image: alpine:latest
pages:
stage: deploy
script:
- echo 'Nothing to do...'
artifacts:
paths:
- public/
expire_in: 1 day
And this is the .gitlab-ci.yml in the project's root:
stages:
- deploy-docs
- gen-text
- deploy-in-dev
gen-text:
stage: gen-text
image: python:3.10.0
before_script:
- pip3 install -r ./command/requirements.txt
script:
- python3 ./command/main.py
artifacts:
paths:
- src/languages
expire_in: 1 day
only: ['stg']
deploy-in-dev:
stage: deploy-in-dev
image: node:latest
dependencies:
- gen-text
script:
- echo 'Only stg with artifacts'
only: ['stg']
docs:
stage: deploy-docs
trigger:
include: docs/.gitlab-ci.yml
The pipeline triggers the downstream correctly, but it fails with: missing pages artifacts.
So, how can I pass the public folder to the downstream pipeline, and why doesn't the secondary .gitlab-ci.yml see the folder?
|
Deploy gitlab pages using downstream stage
|
I assume DotNetCoreCLI@2 just calls "dotnet run". If your program is a console program that "never ends", then I would indeed expect your pipeline to block there. To start your program without blocking, I think that instead of using DotNetCoreCLI@2, you need to insert a "- script" step that uses the Windows "start" command to call "dotnet run" directly.
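For example, something along these lines (the project path is a placeholder, and you may need to give the app a moment to start before the test task runs):
- script: |
    start "" /B dotnet run --project MyWebApp/MyWebApp.csproj
  displayName: 'Start app in background'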
|
I'm trying to add a run task using the .NET Core task, but after it shows that the app is up on localhost, it can't move on to the next task. I believe it doesn't move on to the next task because the .NET task never ends. I want the task to run so that the app is up on localhost and then move on to the next task to test it. Here's my pipeline:
trigger:
- development
pool:
vmImage: 'windows-latest'
resources:
repositories:
- repository: e2ecypress
type: git
name: devopsapp/e2ecypress
jobs:
- job: build_unit_tests
displayName: '.Net Build & Unit Test'
variables:
solution: '**/*.sln'
buildPlatform: 'Any CPU'
buildConfiguration: 'Release'
steps:
- task: NuGetToolInstaller@1
- task: NuGetCommand@2
inputs:
restoreSolution: '$(solution)'
- task: VSBuild@1
inputs:
solution: '$(solution)'
msbuildArgs: '/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation="$(build.artifactStagingDirectory)"'
platform: '$(buildPlatform)'
configuration: '$(buildConfiguration)'
- task: DotNetCoreCLI@2
inputs:
command: 'run'
projects: '*/*.csproj'
- task: VSTest@2
inputs:
platform: '$(buildPlatform)'
configuration: '$(buildConfiguration)'
|
Getting .Net run to continue to next task in pipeline
|
You can use the Get Metadata activity to get the list of child items, then filter on the latest modified date to find the latest file, and use it as the source in the Copy activity.
|
I am trying to load data from CSV files into Azure SQL DB using the Copy activity. First I loaded three files from blob storage into Azure SQL DB. Then three new files were uploaded to blob storage, and now I want to load only the newly added files into Azure SQL DB. The file names are in the format "student_index_date", where index runs from 1-6, and I have to make use of this index.
|
How to load only newly added file in Azure SQL DB
|
Try adding a slash to the directory path:
Unit Tests:
stage: Pre-Build
allow_failure: true
script:
- npm ci
- npm run test
artifacts:
paths:
- coverage/
when: always
|
UPDATE: adding when: always under artifacts fixed the issue; the unit tests were failing, so the coverage folder was not being created as an artifact.
When unit tests are run, a coverage folder is created. I want to save that coverage folder as an artifact in the pipeline so that SonarQube can access the reports in that folder to give an accurate coverage report. When I push up any code, I'm not seeing the coverage folder being saved as an artifact after the unit tests are run in the pre-build stage, so it is not being passed along to SonarQube in the build stage. This is the yml file:
stages:
- Pre-Build
- Build
- etc.
Unit Tests:
stage: Pre-Build
allow_failure: true
script:
- npm ci
- npm run test
artifacts:
paths:
- coverage
when: always
SonarQube:
stage: Build
needs: ['Unit Tests']
except:
refs:
- tags
|
Why is a job artifact not being added in the pipeline?
|
The API needs to be internet-facing for post-deployment gates to be able to resolve and call it.The workaround is to run it as a final step in your pipeline, assuming you run your pipeline on an on-prem agent that has a network route to the API in question. However, you miss out on the asynchronous and retryable nature of the post-deployment gate -- it will call the API once and only once.
|
I need to trigger an on-prem API after my Azure release pipeline successfully finishes. I saw that Azure post-deployment gates are meant exactly for that. The issue I face is that I need to create a generic service connection to my on-prem server from the DevOps server, and I do not know if that is possible security-wise. All the cases I found on the internet were related to APIs deployed over the internet, not ones only available within someone's organization. Could you please share whether that is still possible to do? https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/http-rest-api?view=azure-devops
|
Azure Generic Service Connection to on-prem server
|
No, clearly there's no problem, since real CPUs do this all the time (e.g. Intel since Haswell can run 4 independent add instructions per clock: https://www.realworldtech.com/haswell-cpu/4/, https://uops.info/, https://agner.org/optimize/). The CPU only has to maintain the illusion of having run instructions one at a time, following the ISA's sequential execution model; the same concept as the C "as-if" rule applies. If the ISA doesn't guarantee anything about timing, such as being able to delay N clock cycles with N nop or other instructions, nothing stops a specific implementation from doing as much work as possible in a clock cycle. (Some microcontrollers do have specific timing guarantees or specifications, so code can delay for N cycles with delay loops; or at least specific implementations of some ISAs have such guarantees.) It's 100% normal for modern CPUs to average more than 1 instruction per clock, despite sometimes stalling on cache misses and branch mispredicts, and that clearly means fetching, decoding, and executing multiple instructions per clock cycle in other cycles. See also "Modern Microprocessors: A 90-Minute Guide!" for some basics of superscalar in-order and out-of-order pipelines.
|
The question comes from a RISC-V implementation, but I think it may also apply to many other architectures. Given code with two completely independent instructions in sequence (generic ISA notation):
REG1 = REG2 + REG3
REG4 = REG5 + REG6
In a pipelined implementation, assuming there are no other hazards (simultaneous read/write access to the registers is possible and there are two independent adders), is it a violation of the ISA if the two instructions are executed completely in parallel?
|
Completely simultaneous execution of two instructions (RISCV)
|
The problem is not the ls. Even if logs/repologs is an empty directory, ls still sets exit status 0; if logs/repologs does not exist, ls sets exit code 2. You could catch the latter by guarding the whole pipe with [[ -d logs/repologs ]] && ... The main problem is the grep: if the directory is empty, grep finds no match and returns a non-zero status, and therefore your pipefail fires. You could avoid this by doing ls -a logs/repologs, but this produces at least two additional entries (. and ..), so your wc count would be off by 2, and it would also include other hidden entries in the directory. However, what's the purpose of the whole statement? If you just want to count the number of non-hidden entries in the directory, your method is unreliable anyway, because if you have a file whose name contains an embedded newline character, it would be counted as two entries. A more reasonable approach would be to load all the files into an array and take the length of the array:
shopt -s nullglob
files=(logs/repologs/*"$name"*)
echo Number of non-hidden entries : ${#files[*]}
UPDATE, for completeness: my solution is a bit different from yours in the following respect. Assume that you set name=foo.bar. In your solution, entries fooxbar and fooybar would be counted as well, while in my solution only a literal foo.bar would be counted. The same applies to other characters which have a special meaning inside a simple regular expression.
|
So I have this pipeline, with set -o pipefail at the top of the script:
local numbr_match
numbr_match=$(ls -t1 logs/repologs | grep "$name" | wc --lines)
if [ ! -z "${numbr_match}" ] && [ "${numbr_match}" -le
"$logrot_keep"];then
echo "Found $numbr_match backups matching '$name', don't need to
remove until we have more than $logrot_keep"
else
If ls -t1 does not find anything, and therefore grep fails, I believe the whole pipeline fails with pipefail. What's the best solution to work around this?
|
Bash - how to handle a pipeline with multiple arguments and -o pipefail enabled
|
Maybe an approach like this is suitable:
var mycollectionData = db.getCollection("mycollection").findOne({...})
[
{
$addFields: {
test: mycollectionData
}
}
If your collection has more than one document, then localField/foreignField can be simulated like this:
var mycollectionData = db.getCollection("mycollection").find({...}).toArray()
[
{
$addFields: {
test: {
$filter: {
input: mycollectionData,
as: "foreign",
cond: {$eq: ["$$foreign._id", "$_id"] }
}
}
}
}
]
|
I want to use the value of a field as the value for the from-collection of a pipeline.
[
{
$addFields: {
fromCollection: 'mycollection' // this would be of course a value of the object
}
},
{
$lookup: {
from: '$fromCollection', // here I want to use a field reference, not a static value
pipeline: [ ... ],
as: 'test'
}
}
]
But this isn't working. If I use the value directly in the lookup instead, 'from': 'mycollection', everything works as expected. Is there any way I can use the value of a field as the from-collection name? MongoDB version: db version v5.0.5.
|
Mongo: Use field value as pipeline from-collection
|
In a .gitlab-ci.yml you can define a global before_script. It would look something like this:
stages:
- check-code
before_script:
- C:\Users\9279\Documents\WindowsPowerShell\profile.ps1
- conda activate temp
run_tests:
stage: check-code
script:
- pytest test.py
type_checker:
stage: check-code
script:
- (ls -recurse *.py).fullname | foreach-object {echo "`n$_`n";mypy --strict $_}
I would highly recommend reading the gitlab-ci.yml documentation, as there are many more nice features like this.
|
I've set up a .gitlab-ci.yml file as follows for my self-hosted runner:
stages:
- set-environment
- check-code
set-environment:
stage: set-environment
script:
- C:\Users\9279\Documents\WindowsPowerShell\profile.ps1
- conda activate temp
run_tests:
stage: check-code
script:
- pytest test.py
type_checker:
stage: check-code
script:
- (ls -recurse *.py).fullname | foreach-object {echo "`n$_`n";mypy --strict $_}
I intended to use the set-environment stage to make mypy and pytest available to the subsequent check-code stage. Unfortunately, that's not how it works: GitLab destroys the shell after each stage completes.
|
YML syntax: How do I get the same commands to run before each stage without repeating myself in the YML file?
|
You can use the RDD Repartitioner plugin from the Hub before the CSV output sink to create 1 partition. This one partition will be written out as a single file. Please look at the documentation tab of the plugin for more details.
Thanks and regards, Sagar
|
I'm running an ETL pipeline through Google Cloud Data Fusion. A quick summary of the pipeline's actions:
- Take in a CSV file which is a list of names.
- Take in a table from bigquery-public-data.
- Join the two together and then output the results to a table.
- Also output the results to a Group By, which consolidates duplicates and sums their scores.
- Output the resulting list of author names and scores to both a table and a CSV file in a Google Cloud Storage bucket.
All of this appears to be working properly: the two tables are appearing with the correct data and are queryable. However, the CSV output from the Group By is coming out into the GCS bucket as 37 different parts, each named with the default naming scheme ("part-r-00000" to "part-r-00036"). They do appear in CSV format (both text/csv and application/csv have resulted in usable CSV files). I want the output to be exported into the GCS bucket folder as a single CSV file with a given name (author_rankings.csv). Below I'm attaching a screenshot of the pipeline and an image of some of the output. Please let me know if I can provide any additional information. Thank you for any insight.
Data Fusion pipeline
Current output as many files
|
How to output write to a single CSV file from inside Google Cloud Data Fusion
|
I think you may be confusing what is and isn't available (and I think the answer you linked is too). You can absolutely use variables to populate which runner your job should run on; I use it today in workflows on the SaaS offering. Using variables to determine the runner tag can still be confusing, though, because whether the variable works properly depends heavily on where you define it. If your variable is within the root scope of the CI/CD pipeline (i.e., either within a top-level variables block or within a workflow block) it will work properly. If you're attempting to define it within the scope of a job (i.e., within a job:rules:if:variables block), it will not work properly. Since your example above is within the workflow block, it will properly select your tag and apply it to downstream jobs.
rules:
- variables:
RUNNER: shared-macos-amd64
test:
image: alpine:latest
tags:
- $RUNNER
script:
- echo $CI_RUNNER_DESCRIPTION
This properly picks up the macOS runner (and prints an error since I'm not in their beta).
|
I found the following proposal and tested it out (see code sample), but could not make it work. We run GitLab 14.3.4; how can I determine whether this is available for this version? If this feature is not working, how can I deploy to different environments if I have different runners, one for my prod and one for my dev environment? So far, I have one pipeline for each environment using its dedicated tags, as dynamic tags are not available so far. Any help would be appreciated, thanks!
workflow:
rules:
- if: '$CI_PIPELINE_SOURCE == "web"'
- if: '$CI_PIPELINE_SOURCE == "parent_pipeline"'
- if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
- if: "$CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS"
when: never
- if: '$CI_COMMIT_BRANCH =~ /^feature.*$/'
variables:
TARGET: dev
- if: "$CI_COMMIT_BRANCH"
|
How to deploy to different enviroments based on workflow variables?
|
Use pd.to_datetime instead of apply + lambda + datetime.strptime:
df.assign(date=pd.to_datetime(df['date'], format='%Y-%m-%d'))
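If this needs to work inside a method chain where the dataframe is an intermediate result, you can also pass a callable to assign, for example:
df.assign(date=lambda d: pd.to_datetime(d['date'], format='%Y-%m-%d'))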
|
I have a pandas dataframe that contains a date column where the dates are stored as strings:
0 2021-12-04
1 2021-12-01
2 2021-11-29
3 2021-11-15
4 2021-11-06
Name: date, dtype: object
I have a solution that uses variable assignment:
df['date'] = df.apply(lambda x: datetime.strptime(x['date'], '%Y-%m-%d'), axis=1)
But since this transformation is part of a data pipeline, I want to use the assign method. I tried:
df.assign(date=df['date'].apply(datetime.strptime('%Y-%m-%d')))
But this produces an error: KeyError: 'date'. I suspect this is because the values from the date column aren't being passed to datetime.strptime along with the '%Y-%m-%d' format. What is the best way to solve this error?
|
Pandas how to apply a function that takes two arguments
|
LinearDiscriminantAnalysis is a dimensionality reduction technique that can be compared to PCA, so it can be used within a pipeline as a preprocessing step. It is quite possible for classifiers that consume its output to end up with the same score, since LDA projects the inputs onto the most discriminative directions (and produces at most n_classes - 1 components, so with two classes every downstream classifier sees the same one-dimensional projection). Below is an example of a pipeline that uses LDA as a preprocessing step:
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import VarianceThreshold
from sklearn.preprocessing import Normalizer
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
X, y = make_classification(n_classes=2)
pipe = make_pipeline(VarianceThreshold(),
Normalizer(),
LinearDiscriminantAnalysis(),
LogisticRegression())
pipe.fit(X, y)
|
This is what I put together to run the data through a variance threshold for feature selection, then a normalizer and LDA for dimensionality reduction. The LDA element I'm not too sure about, as I can't find any examples of it being used in a pipeline as a dimensionality reduction / data transformation technique (as opposed to a standalone classifier). I am a bit worried, as when this is used and the transformed data is passed on to a series of classifiers, they produce identical accuracy, precision, recall and F1 scores; only AdaBoost returns something different. Is there something I'm doing wrong here?
pipeline = Pipeline([
('feature_selection', VarianceThreshold()),
('normaliser', Normalizer()),
('lda', LinearDiscriminantAnalysis())], verbose = True)
X_train_post_pipeline = pipeline.fit_transform(X_train, Y_train)
X_test_post_pipeline = pipeline.transform(X_test)
|
Can you use LDA (Linear Discriminant Analysis) as part of sklearn pipeline for preprocessing?
|
Yes, you can use multiple sources and sinks in a single data flow and reference the same source in a join, and you can order the sink writes using the custom sink ordering property. I am using an inline dataset, but you can use any type: use the inline dataset to store the result in sink1, and in source3 use the same inline dataset to join with source2. Make sure you set the sink order correctly; if you have the wrong order, or if a transformation encounters no data, it will publish with no errors but the pipeline run will fail. Refer to the MS doc: Sink ordering.
|
I am trying to load sales data into the database using Azure Synapse Analytics pipelines, and the strategy is as follows (the scenario is made up):
- Load the students' data into the table Students.
- Load the students' classes information into the table StudentsClasses. In this data flow I need to join the data with the Students table (obviously, the new data about students must already be loaded into Students at this join step).
Can I have these two processes in the same data flow with sink ordering? Or does the sink ordering not define source read ordering (that is, the source reading and transformations are done in parallel, and only the writes follow the ordering)?
Edit: This is an example data flow that I want to implement: source3 and sink1 are the same table. What I want is to first populate sink1, then use it for source 2 to join with. Can this be implemented using sink ordering? Or will source3 be empty regardless of sink ordering?
|
Can you use a data flow sink as a source in the same data flow?
|
You might be able to use the Run Inline Powershell Azure Pipelines task, like so:
- task: InlinePowershell@1
inputs:
Script: |
$foo = 'bar'
Write-Output "##vso[task.setvariable variable=foo;]$foo"
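Alternatively, the counter expression itself can normally be declared directly as a YAML runtime-expression variable, without any echo wrapper, roughly like this:
variables:
- name: Major
  value: 1
- name: Minor
  value: 0
- name: Patch
  value: $[counter(format('{0}.{1}', variables['Major'], variables['Minor']), 0)]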
|
I'm currently using an extension to create the git tag; my problem is that I'm adding the variables through the Azure DevOps UI instead of the YAML file.
- task: GitTag@6
displayName: 'Creating Git Tag'
inputs:
workingdir: '$(SYSTEM.DEFAULTWORKINGDIRECTORY)'
git: '$(Major).$(Minor).$(Patch)'
The variables are:
Name: Major, Value: 1
Name: Minor, Value: 0
Name: Patch, Value: $[counter(format('{0}.{1}', variables['Major'], variables['Minor']), 0)]
The current output is the following: 1.0.0
My question would be: how can I declare Patch as a variable in the YAML file? I tried to add the following variables:
- name: Major
value: 1
- name: Minor
value: 0
- name: patch
  value: echo "##vso$[counter(format('{0}.{1}', variables['Major'], variables['Minor']), 0)]"
Now my new output is: 1.0.$[counter(format('{0}.{1}', variables['Major'], variables['Minor'])
Does anyone know how I can add a script as a variable inside of the YAML file?
|
What is the best way to create git tags from Azure Pipelines?
|
Solution
I included an imperative (scripted) example using Calendar.
Imperative Pipeline Example
node {
def today = Calendar.getInstance();
def dayOfWeek = today.get(Calendar.DAY_OF_WEEK);
if( dayOfWeek == Calendar.FRIDAY ) {
stage('build on friday') {
echo "Hai its friday"
}
}
if( dayOfWeek == Calendar.SUNDAY ) {
stage('build on sunday') {
echo "Hai its sunday"
}
}
}
Obviously a run would need to occur on Friday and Sunday, so your cron would need to reflect that. For example:
0 1 * * 0,5
|
My Jenkins scripted pipeline has 2 different stages: one must be built on Friday and the other must be built on Sunday. I tried a cron job for each stage, but I want to filter based on the day of the week; can I use a when condition for this?
node {
stage('build on friday ') {
echo "Hai its friday"
}
stage('build on sunday') {
echo "Hai its sunday"
}
}
|
I have a single pipeline with two stages; each stage must be called on a different day of the week
|
This is possible. You would need to generate a file that, when parsed, defines DAG B; you cannot just generate a DAG object as part of DAG A. You will need write access to the DagBag folder across all your Airflow components (scheduler, webserver, and workers). Also, allow some time for the DAG B file to be parsed before triggering it with the TriggerDagRunOperator.
All of this is a little hacky. AIP-42 is being worked on, which will allow modifying the tasks within a DAG based on other results and which I think will help you get the desired effect.
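To make the idea concrete, here is a minimal, hedged sketch of what such a DAG A could look like. The imports assume Airflow 2.x; the file name dag_b.py, the dag ids dag_a/dag_b and the generated DAG's contents are made up for illustration, and you still need to give the scheduler time to parse the new file before the trigger task runs.
# Hypothetical sketch: a DAG A task writes a file defining DAG B into the DAGs
# folder, then triggers it. All names here are illustrative assumptions.
import os
import textwrap
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.operators.trigger_dagrun import TriggerDagRunOperator
from airflow.settings import DAGS_FOLDER


def write_dag_b_file():
    # Render a DAG B definition (here a fixed template, but it could depend on
    # computed results) and drop it into the DAGs folder for the scheduler.
    dag_b_source = textwrap.dedent("""
        from datetime import datetime
        from airflow import DAG
        from airflow.operators.bash import BashOperator

        with DAG(dag_id="dag_b", start_date=datetime(2021, 1, 1),
                 schedule_interval=None) as dag:
            BashOperator(task_id="work", bash_command="echo generated")
    """)
    with open(os.path.join(DAGS_FOLDER, "dag_b.py"), "w") as f:
        f.write(dag_b_source)


with DAG(dag_id="dag_a", start_date=datetime(2021, 1, 1),
         schedule_interval=None) as dag_a:
    generate = PythonOperator(task_id="generate_dag_b",
                              python_callable=write_dag_b_file)
    # Fires DAG B once its file has been parsed by the scheduler.
    trigger = TriggerDagRunOperator(task_id="trigger_dag_b",
                                    trigger_dag_id="dag_b")
    generate >> trigger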
|
I'd like to know if you can easily, dynamically generate a DAG within a DAG run. DAG A would be:
1. Do some data computations
2. Based on the previous result, generate DAG B
3. Load DAG B into the Airflow DagBag
4. TriggerDagRunOperator for DAG B
5. ExternalTaskSensor on the last DAG B task
Cheers,
|
Dynamically generate a DAG during a DAG Run - Airflow
|
env doesn't work in the static property, because the property is already initialized before you enter the node closure, so env just isn't available yet. I see two ways around this:
1. Turn the property into a function and pass the env variable as a parameter.
2. Make it a non-static function and pass env to the class constructor.
I would probably go with the latter, as it will be easier to use when you have many test settings.
class TestSettings {
public static String getNuGetPackagesPath( def env ) { "${env.USERPROFILE}\\.nuget\\packages" }
}
class TestSettings2 {
def env = null
TestSettings2( def env ) {
this.env = env
}
public String getNuGetPackagesPath() { "${env.USERPROFILE}\\.nuget\\packages" }
}
node("master"){
println env.USERPROFILE
println TestSettings.getNuGetPackagesPath( env )
def testSettings = new TestSettings2( env )
// Note that we can use the method like a property!
println testSettings.nuGetPackagesPath
}
|
I have a distributed Jenkins build, and the user under which the Jenkins process runs on the slaves is not necessarily static, so I need a mechanism to get the user per node. I am trying something like:
#!/usr/bin/env groovy
class TestSettings {
public static String NuGetPackagesPath = "${env.USERPROFILE}\\.nuget\\packages"
}
node("master"){
println env.USERPROFILE // works as expected
println TestSettings.NuGetPackagesPath // throws exception
}
node("build"){
println env.USERPROFILE // works as expected
println TestSettings.NuGetPackagesPath // throws exception
}
|
Jenkins Groovy Pipeline Get (Windows) User Folder Per Node
|
The reason is that the "..." context does not exist in your kubeconfig file. You can run kubectl config view -o jsonpath='{.current-context}' to check the current context and use that context. As per this document:
Set which Kubernetes cluster kubectl communicates with and modifies configuration information.
|
My Groovy pipeline has 3 steps (all with shell):
stage 1: authenticate to the GKE cluster and update kubeconfig
stage 2: helm install on that cluster (using --context)
stage 3: kubectl wait for condition (using --context)
Now, most of the time these jobs run fine with no issues at all.
But a few days ago it gave me this error on stage 3: error: context "..." does not exist
I can't figure out why this failed once, and unfortunately I don't have the full log of that job any more.
It's weird, as the context worked for the helm install stage, so how could it be not found all of a sudden? What do you think can cause this random issue? How can I avoid it in the future?
|
jenkins kubernetes context not found
|
I used JS and Lodash to merge two schema files into one; it works like magic.
const _ = require("lodash");
const schema = require('./schema.json');
const extention = require("./extentions.json", 'utf-8');
var newschema = _.merge(schema, extention);
const fs = require('fs')
fs.writeFile("./newschema.json", JSON.stringify(newschema, null, 4), (err) =>
{
if (err) throw err;
});
|
I'm trying to find a way to add extra fields to an existing JSON schema in a new JSON schema file, generate the result in a GitHub pipeline, and have the project use it, but I'm not sure how to do it. For example, the existing schema is below:
{
type: 'object',
required: [ 'product' ],
additionalProperties: false,
properties: {
productName: {
enum: [ 'product1', 'product2', 'product3' ],
},
price: { type: 'int' },
},
};
But now I want to add more fields to properties in an additional JSON file; how do I do it? For example, I need product quantity now, so I can compile the two JSON schemas and form one complete class. Also, is it possible to reference a definition from another schema, or do I always have to define them separately? Very much appreciated if you could help me out.
|
Add extra fields to existing json schema and generate c# class on fly
|
How to perform task branching in Azure Pipelines in case of failure?
You could add a command-line or inline PowerShell task between npm1 and npm2a to invoke Logging Commands:
Write-Host "##vso[task.setvariable variable=testvar;]npm2b"
Then set the condition for this task to "Only when a previous task has failed", and set a variable testvar with the default value npm2a.
Besides that, set the conditions for the tasks npm2a and npm2b:
and(succeeded(), eq(variables['testvar'], 'npm2a'))
and(failed(), eq(variables['testvar'], 'npm2b'))
|
I have 3 tasks below.
What I want to do is: when npm1 succeeds, it should perform npm2a and skip npm2b.
And when npm1 fails, it should skip npm2a and execute npm2b. How can this be done in Azure Pipelines?
Please don't point me to a tutorial.
|
How to perform task branching in Azure Pipelines in case of failure?
|
I received this answer from the Atlassian community:
- step:
name: upload to test
image:
name: ci:latest
script:
- bin=`ls | grep .bin`
- echo export VERSION=${bin%.*} >> build.env
- aws s3 sync . s3://somebacketname/test/
artifacts:
- build.env
- step:
name: testing
trigger: manual
script:
- source build.env
- pipe: atlassian/trigger-pipeline:4.1.5
variables:
BITBUCKET_USERNAME: $USER
BITBUCKET_APP_PASSWORD: $PASSWORD
REPOSITORY: 'test'
BRANCH_NAME: 'master'
CUSTOM_PIPELINE_NAME: 'critical-test'
WAIT: 'true'
PIPELINE_VARIABLES: >
[{
"key": "DESIRED_VERSION",
"value": "$VERSION"
},
{
"key": "DURATION",
"value": "15"
}]
|
I have a problem with passing an env variable from one step to a step that triggers another pipeline in another repo.
After extracting VERSION I need to trigger another pipeline. I think it's just a common case of passing a variable from a parent step to the next step, but I can't find information on how to do it.
In atlassian/trigger-pipeline I can't run any script steps before I trigger the other pipeline.
Pipeline example:
- step:
name: upload to test
image:
name: ci:latest
script:
- bin=`ls | grep .bin`
- export VERSION=${bin%.*}
- aws s3 sync . s3://somebacketname/test/
- step:
name: testing
trigger: manual
script:
- pipe: atlassian/trigger-pipeline:4.1.5
variables:
BITBUCKET_USERNAME: $USER
BITBUCKET_APP_PASSWORD: $PASSWORD
REPOSITORY: 'test'
BRANCH_NAME: 'master'
CUSTOM_PIPELINE_NAME: 'critical-test'
WAIT: 'true'
PIPELINE_VARIABLES: >
[{
"key": "DESIRED_VERSION",
"value": "$VERSION"
},
{
"key": "DURATION",
"value": "15"
}]
|
Bitbucket pipelines environment variables to trigger-pipeline step
|
This doesn't work on a filter, but you can make use of the automatic variable $input if you make a function out of this:
function Out-String2 {
Param(
[Parameter(Mandatory = $true, Position = 0, ValueFromPipeline = $true)]
$Data
)
# if the data is sent through the pipeline, use $input to collect is as array
if ($PSCmdlet.MyInvocation.ExpectingInput) { $Data = @($input) }
$Data | Out-String -Stream -Width 9999 | ForEach-Object { "$($_.Trim())`r`n" }
}
Call it using Write-Host (Out-String2 (Get-ChildItem env:)) or Write-Host (Get-ChildItem env: | Out-String2)
|
I'd like to be able to write a version of PowerShell's Out-String that doesn't pad or truncate the resulting output. I can achieve something like this using the following code:
Write-Host (Get-ChildItem env: | Out-String -Stream -Width 9999 | ForEach-Object { "$($_.Trim())`n" })
Resulting in the desired output:
Name                            Value
---- -----
ALLUSERSPROFILE C:\ProgramData
APPDATA C:\Users\Me\AppData\Roaming
... etc ...
I'd like to capture this functionality in a new version of Out-String, but can't work out how to accomplish this. This is as close as I've got:
filter Out-String2 {
$_ | Out-String -Stream -Width 9999 | ForEach-Object { "$($_.Trim())`n" }
}
Write-Host (Get-ChildItem env: | Out-String2)
But this results in the undesirable effect of having each element of Get-ChildItem rendered independently in the filter, resulting in output like:
Name                            Value
---- -----
ALLUSERSPROFILE C:\ProgramData
Name Value
---- -----
APPDATA C:\Users\Me\AppData\Roaming
...etc...
Is this kind of composition of pipeline elements possible in PowerShell, and if so, how?
|
Composing PowerShell's Out-String within another function
|
In order for a scheduled pipeline to be created successfully:
- The schedule owner must have permissions to merge into the target branch.
- The pipeline configuration must be valid.
Otherwise the pipeline is not created. (Source: /help/ci/pipelines/schedules)
Make sure the schedule owner has permission to merge to the target branch, e.g. under "protected branches".
|
I have a "Developer" role and I can't see the Play button for a job scheduled on the master branch, whereas for other sprint branches the Play button is visible. Does the role need to be upgraded? Can someone help me understand?
|
Play button not available for scheduled master branch for Developer role in GitLab Runner
|
That's definitely a good strategy - one separate machine/host/whatever to handle deployment (Gradle) and batch processing (MLCP/Corb) tasks. As you note, you don't need Gradle (nor MLCP or Corb) available on any of the hosts. For MLCP and Corb in particular, it's best to run those on a separate machine so that they're not competing for system resources with any of the ML hosts.
|
The recommended MarkLogic automation is via Gradle. I wonder whether there should be a dedicated single VM to run those Gradle tasks to control the configuration and deployment to the different Prod / UAT / Test / Dev ML cluster environments.
Is my understanding below correct?
- One single dedicated VM should be specifically included in the overall design to handle ML configuration deployment for the different ML environments.
- Gradle is not required on each ML host. Gradle should only be used in that ML management depot.
- In each gradle.properties, the load balancer IP for each ML environment cluster should be used.
(Diagram: https://i.stack.imgur.com/zsh1O.png)
Scope of that [ML Management Depot] VM:
- ML XQuery Git deployment pipeline
- Any post-deployment tasks like Postman script execution or additional XQuery execution
- MLCP tasks:
(a) Data sync Prod → UAT → Test → Dev
(b) Documents ingestion (XML, PDF, HTML, others)
(c) Documents export for raw file backup to Azure Data Lake
- Corb tasks: bulk data update and report
(All of those should be configured as Gradle tasks.)
|
How to effectively manage ML code deployment via Gradle to multiple ML cluster environments
|
My team and I investigated the root cause of this issue and could not find a definitive explanation; the most frequent reason is that the GitLab Runner, running on a different server, has connection issues.
One of my team members found a solution that worked for us: our issue is solved and the GitLab pipeline now builds and runs successfully. I hope anyone with a similar issue also finds this helpful.
<pluginRepositories>
<pluginRepository>
<id>central</id>
<name>Central Repository</name>
<url>https://repo.maven.apache.org/maven2</url>
<layout>default</layout>
<snapshots>
<enabled>false</enabled>
</snapshots>
<releases>
<updatePolicy>never</updatePolicy>
</releases>
</pluginRepository>
</pluginRepositories>
<repositories>
<repository>
<id>central</id>
<name>Central Repository</name>
<url>https://repo.maven.apache.org/maven2</url>
<layout>default</layout>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
</repositories>
Adding the above-mentioned code block to the POM file solved the issue.
|
When I trigger the build pipeline of my application, I get a "failed to execute goal" error. However, when I build this application locally, I don't get such an exception; I only run into this situation when I build via GitLab.
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-clean-plugin:2.5:clean (default-clean) on project sample-app: Execution default-clean of goal org.apache.maven.plugins:maven-clean-plugin:2.5:clean failed: Plugin org.apache.maven.plugins:maven-clean-plugin:2.5 or one of its dependencies could not be resolved: Failed to collect dependencies at org.apache.maven.plugins:maven-clean-plugin:jar:2.5 -> org.apache.maven:maven-plugin-api:jar:2.0.6: Failed to read artifact descriptor for org.apache.maven:maven-plugin-api:jar:2.0.6: Could not transfer artifact org.apache.maven:maven-plugin-api:pom:2.0.6 from/to central (https://repo.maven.apache.org/maven2): Connection reset -> [Help 1]
For a better look, an image of the error is also attached below...
|
Failed to execute goal in GitLab CI Pipeline
|
The scaler is being applied. Scaling has no effect on an unpenalized linear regression, so it is expected that the cross-validation scores would be the same. Compare with Lasso, where scaling does matter.
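A small self-contained illustration of this point, using a made-up dataset (none of these names or numbers come from the notebook in the question):
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic regression data with one feature on a much larger scale.
X, y = make_regression(n_samples=200, n_features=5, noise=10, random_state=0)
X[:, 0] *= 100

# Unpenalized linear regression: scaling does not change the CV scores.
ols_raw = cross_val_score(LinearRegression(), X, y, cv=5)
ols_scaled = cross_val_score(make_pipeline(StandardScaler(), LinearRegression()), X, y, cv=5)
print(np.allclose(ols_raw, ols_scaled))      # True (up to floating point)

# Penalized model (Lasso): scaling does change the CV scores.
lasso_raw = cross_val_score(Lasso(alpha=1.0), X, y, cv=5)
lasso_scaled = cross_val_score(make_pipeline(StandardScaler(), Lasso(alpha=1.0)), X, y, cv=5)
print(np.allclose(lasso_raw, lasso_scaled))  # typically False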
|
I have an issue when doing some cross validation using Scikit Learn.
I have built a pipeline consisting of 2 elements: a scaler and a regression model. My issue is that the scaler method that I configured in the pipeline does not seem to be applied (i.e., taken into account) during the calculations. Please check my notebook and tell me what is wrong.
Here's the link: https://colab.research.google.com/drive/1KHqHsDHNkGLj4e0u-EWY9oj00NXeO5u3?usp=sharing
And here's also the link to the dataset that I have used: https://drive.google.com/file/d/1nyx0BitzxBLQjsAAAxfHt-9SzKqk9dWv/view?usp=sharing
Best regards.
|
Cross Validation - Scaler Method not Being Applied in a Pipeline
|
- $group by name and sub and get the average marks with $avg
- $group by name only and construct the subject array
- $arrayToObject to convert the key-value array of objects to an object
- $replaceRoot to make the above object the root
db.collection.aggregate([
{
$group: {
_id: {
name: "$name",
sub: "$sub"
},
avgMarks: { $avg: "$marks" }
}
},
{
$group: {
_id: "$_id.name",
sub: {
$push: {
sub: "$_id.sub",
avgMarks: "$avgMarks"
}
}
}
},
{
$replaceRoot: {
newRoot: {
$arrayToObject: [
[{ k: "$_id", v: "$sub" }]
]
}
}
}
])
Playground
|
I have sample data like:
db.student.insert({"name":"Vikash", "sub":"Physics", "marks":10})
db.student.insert({"name":"Vikash", "sub":"Math", "marks":20})
db.student.insert({"name":"Raj", "sub":"Physics", "marks":5})
db.student.insert({"name":"Raj", "sub":"Math", "marks":20})
db.student.insert({"name":"Vikash", "sub":"Physics", "marks":20})
db.student.insert({"name":"Vikash", "sub":"Math", "marks":30})
db.student.insert({"name":"Raj", "sub":"Physics", "marks":40})
db.student.insert({"name":"Raj", "sub":"Math", "marks":10})And Sample output is:{
_id:"Vikash":[{
"sub":"Physics",
"avgMarks":15
},
{
"sub":"Math",
"avgMarks":25
}]
}
{
_id:"Raj":[{
"sub":"Physics",
"avgMarks":22.5
},
{
"sub":"Math",
"avgMarks":15
}]
}
|
Aggregate group multiple fields and calculate average in MongoDB
|
Example DAGs are just example DAGs: they are "hard-coded" in the Airflow installation and shown only when you enable them in the config, and they are mostly there so you can quickly see some examples.
Your own DAGs should be placed in ${AIRFLOW_HOME}/dags, not in the example_dags folder. Airflow only regularly scans the DAGs folder for changes because it does not expect example DAGs to change. Ever. It's a strange idea to change data inside an installed Python package.
Just place your DAG in ${AIRFLOW_HOME}/dags, and if it has no problems it should show up quickly. You can also disable examples in airflow.cfg, and then you will have a cleaner list containing only your DAGs from the "dags" folder.
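For reference, here is a minimal sketch of a stand-alone DAG file you could drop into ${AIRFLOW_HOME}/dags; the dag_id, schedule and command are placeholders, and the imports assume Airflow 2.x:
# ~/airflow/dags/my_test_dag.py -- a minimal DAG placed in the dags folder.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="my_test_dag",            # must be unique across the DagBag
    start_date=datetime(2021, 1, 1),
    schedule_interval="0 0 * * *",
    catchup=False,
    tags=["some_tag"],
) as dag:
    BashOperator(task_id="say_hello", bash_command="echo hello")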
|
I ran the Airflow web server under one of my virtual environments (myenv). When I tried to add some new dummy DAGs, the result didn't go as I expected.
Here is the story: first, I created a new DAG which is literally a copy of "example_bash_operator" with another dag_id. Then I put this DAG under the same directory as the other example DAGs, which is "~/myenv/lib/python3.8/site-packages/airflow/example_dags". But when I opened the web server UI, this newly created DAG wasn't shown.
I'm really confused. Should I change AIRFLOW_HOME? I did export AIRFLOW_HOME=~/airflow as the Airflow documentation indicates. What's more, why are the example DAGs collected under the virtual environment's site-packages directory instead of the Airflow home that I declared?
with DAG(
dag_id='my_test',
default_args=args,
schedule_interval='0 0 * * *',
start_date=days_ago(2),
dagrun_timeout=timedelta(minutes=60),
tags=['some_tag'],
params={"example_key": "example_value"},
) as dag
The above is the only place that I changed from example_bash_operator.
|
Add new DAG to airflow scheduler
|
The easiest way to achieve this is using the built-in keyword readJSON, which is part of the Pipeline Utility Steps plugin (usually installed by default):
readJSON - Reads a file in the current working directory or a String as a plain text JSON file. The returned object is a normal Map with String keys or a List of primitives or Map.
You can use it to read files from the workspace or to parse a given JSON text; in both cases it returns a dictionary representation of the given JSON, which can then be used easily in the code. In your case it will look like:
stage("Using curl example") {
steps {
script {
def url = "http://devrest01.ydilo.es:8080/yoigooq/text?text=hola"
def response = sh(script: "curl -s $url", returnStdout: true)
jsonData = readJSON text: response
echo jsonData.text // you can also use jsonData['text']
}
}
}
|
This question already has an answer here: Parsing JSON on Jenkins Pipeline (groovy) (1 answer). Closed 2 years ago.
I'm trying to access individual fields from the JSON, for example echo response.text.
This is the code:
stage("Using curl example") {
steps {
script {
final String url = "http://devrest01.ydilo.es:8080/yoigooq/text?text=hola"
final response = sh(script: "curl -s $url", returnStdout: true)
echo response
                //Here I want to access a specific field from the JSON
}
}
}
|
Access JSON Data with pipeline [duplicate]
|
Your branch name is dev but it is not matching the rule you defined. That's why GitLab claims there are no stages/jobs for this pipeline when you execute the run manually. Please edit the rule to match dev like this (double quotes instead of slashes):
rules:
- if: '$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME == "dev"'
when: manual
|
I get an error when I try to run my pipeline on my dev branch. My .gitlab-ci.yml file on my dev branch:
stages:
- build
build:
stage: build
rules:
- if: '$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME == /dev/'
when: manual
script:
- echo "Hello World !"First of all, my pipeline is not executed when I create a merge request fromdevtomaster. And the second issue, I get an error message when I try to execute them with the button "Run pipeline" :Pipeline cannot be run.
No stages / jobs for this pipeline.
|
Issue with pipelines on GitLab
|
So this was resolved by fixing one issue and implementing the artifact module properly.
In the .sh script there was one section missing, so the output .txt file wasn't being created. The missing part was a specified region: --region
Then the artifact section needed to change to:
artifacts:
name: Lambda.txt
paths:
- Lambda.txt
|
I am using GitLab to execute a script which generates a .txt file. I then need to export that file as an artifact using the GitLab artifact module. Below is the CI/CD pipeline:
stages:
- run
variables:
VAULT_ADDR: https://vault:800
build:
stage: run
image:
name: nexus.service:840/terraform:stable
entrypoint:
- '/usr/bin/env'
- 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
only:
- master
script:
- export AWS_ACCESS_KEY_ID="$(vault read -field=value secret/aws/aws_access_key)"
- export AWS_SECRET_ACCESS_KEY="$(vault read -field=value secret/aws/aws_secret_key)"
    - ./src/GetFunction.sh
Below is the .sh script that the pipeline runs:
#!/bin/bash
aws \
resourcegroupstaggingapi \
get-resources \
--resource-type-filters "lambda" \
| jq -r '.ResourceTagMappingList[] | [.ResourceARN, ((.Tags | map([.Key, .Value] | join("="))) | join(","))] | @csv' > Lambda.txt
I've tried adding the artifact module like below, but I'm not having any luck and the job is failing. Without the artifact module, the job runs OK but I am unable to retrieve a .txt file.
artifacts:
paths:
    - Lambda.txt
Any idea? I think the artifact module might be overkill for what I am trying to achieve.
|
Send .txt output of a GitLab CICD pipeline to artifact module
|
There are two nice options that you can use:
1. The Git Parameter Plugin, which integrates with your SCM step configuration and allows you to expose parameters related to the defined repository: you can easily create a select list of available branches, tags, revisions or pull requests according to your needs. This plugin is best used if you already have an SCM configuration in your job, as it draws the repository information from that configuration.
2. The List Git Branches Parameter Plugin, which also adds the ability to create a parameter for choosing branches, tags or revisions from a configured git repository. Unlike the Git Parameter Plugin, this plugin requires a git repository to be defined instead of reading the Git SCM configuration from your project. In addition, this plugin will not change the workspace at all at build time.
To sum up: when you already have an SCM configuration in your job, the Git Parameter Plugin is the perfect choice. But sometimes we want to specify a git branch or tag as a parameter before the execution starts; for "Pipeline script" jobs (not "Pipeline script from SCM") that use SCM in the script, that is impossible with the Git Parameter Plugin. In this particular case, a plugin that can list remote git branches or tags without defining SCM in the job is needed, and the List Git Branches Parameter Plugin can achieve that.
So I have defined a Jenkins pipeline that only runs the build when I'm passing a git tag as a string. Is there a way to list all the tags from a specific branch in a dropdown in order to select the one I want to build?
|
Jenkins pipeline parameters
|
I would just use Spark for all of it. Read from SQL Server, do your transformations and write out to Mongo (SQL Server --> Spark --> MongoDB):
#SQL server
df = (
spark
.read
.format('jdbc')
.option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
.option('url', 'jdbc url')
.option('user', user)
.option('password', password)
.option('dbtable', 'schema.table')
.option('tempdir', 's3://....')
.option('forward_spark_s3_credentials', 'true')
.load()
)
# Mongo
df = spark.read.format("mongo").option("uri","mongodb://127.0.0.1/people.contacts").load()
df.write.format("mongo").option("uri","mongodb://127.0.0.1/people.contacts").mode("append").save()
|
I need your comments about how to make a data pipeline from SQL Server to MongoDB Atlas cloud. In this pipeline, I do many complicated transformation tasks and even analysis and fault detection, which need a comparison between the current data and previously processed data. There are some error criteria which can only be calculated when the new data is compared with previously processed data.
So, the pipeline is not just a single-direction one. Can it be done with a Kinesis-Lambda pipeline on AWS, or is it better to do it with Kafka and Spark on our company's server and then upload the result to the cloud? In both cases, how can my transformation unit read the data from the destination?
I have depicted my two ideas in the image below.
ETL pipeline from SQL Server to MongoDB Atlas
|
This is a common misconception. "Linear model" refers to the parameters, not the features. Say you have features x and values y. Your linear model will be
y = a_0 + a_1 * x
You can generate additional features by arithmetic operations, e.g. x**2. Now your model becomes
y = a_0 + a_1 * x + a_2 * x**2
It is still a linear model, because it is linear in a_0, a_1 and a_2. It just has a polynomial feature.
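A tiny runnable sketch of that idea, with made-up data, mirroring the pipeline from the question: PolynomialFeatures only expands the input into [x, x**2], and LinearRegression then fits the coefficients linearly on those columns.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# y is quadratic in x, but the model is still linear in its coefficients.
x = np.linspace(-3, 3, 50).reshape(-1, 1)
y = 1.0 + 2.0 * x.ravel() + 0.5 * x.ravel() ** 2

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("polynomial", PolynomialFeatures(degree=2, include_bias=False)),
    ("model", LinearRegression()),
])
pipe.fit(x, y)

# The fitted estimator is still a plain LinearRegression; it just sees
# [x, x**2] (on the scaled input) as two separate columns.
print(pipe.named_steps["model"].coef_)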
|
I'm learning Data Analysis with Python and there is something I can not figure out.
I understand that there are three options to develop a model: linear, multiple linear, and polynomial. However, then I get to a new concept called 'pipelines'. I put some code here:
Input=[('scale',StandardScaler()), ('polynomial', PolynomialFeatures(include_bias=False)), ('model',LinearRegression())]
The normalization is OK; however, I do not understand why I introduce PolynomialFeatures as a parameter if I will use a linear model. It does not make sense to me. Please, could someone clarify this for me?
|
Pipelines in Pandas Python
|
Just use a separate transformer for each text feature.
preprocessor = ColumnTransformer(transformers=[
('numeric', numeric_transformer, numeric_features),
('categorical', categorical_transformer, categorical_features),
('text', text_transformer, 'text_feature'),
('more_text', text_transformer, 'another_text_feature'),
])
(The transformers get cloned during fitting, so you'll have two separate copies of text_transformer and everything is fine. If it worries you to specify the same transformer twice like this, you could always copy/clone it manually before specifying the ColumnTransformer.)
|
Similar to this problem (ColumnTransformer fails with CountVectorizer in a pipeline), I want to apply CountVectorizer/HashingVectorizer on a column with text features using the ColumnTransformer in a pipeline. But I do not have only one text feature; I have multiple. If I pass a single feature (not as a list, as suggested in the solution to the other question) it works fine. How do I do it for multiple features?
numeric_features = ['x0', 'x1', 'y0', 'y1']
categorical_features = []
text_features = ['text_feature', 'another_text_feature']
numeric_transformer = Pipeline(steps=[('scaler', StandardScaler())])
categorical_transformer = Pipeline(steps=[('encoder', OneHotEncoder())])
text_transformer = Pipeline(steps=[('hashing', HashingVectorizer())])
preprocessor = ColumnTransformer(transformers=[
('numeric', numeric_transformer, numeric_features),
('categorical', categorical_transformer, categorical_features),
('text', text_transformer, text_features)
])
steps = [('preprocessor', preprocessor),
('clf', SGDClassifier())]
pipeline = Pipeline(steps=steps)
pipeline.fit(X_train, y_train)
|
ColumnTransformer fails with CountVectorizer/HashingVectorizer in a pipeline (multiple textfeatures)
|
In the pipeline file:
docker run -e DEPLOY_USER=${DEPLOY_USER} -e DEPLOY_USER_PASSWORD=${DEPLOY_USER_PASSWORD}
In the package.json script:
"sendToSharePoint": "m365 login --authType password --userName $DEPLOY_USER --password $DEPLOY_USER_PASSWORD"
Still struggling to use secure secrets.
|
I am hoping that someone can fill in a knowledge gap for me. In my TypeScript application I have created a pipeline in my code for my build, and this is working. However, in my package.json I currently have my username and password hard-coded. I am assuming that I should add the username and password to the environment as secure variables? If so, how do I reference them in TypeScript? I found examples for other environments but not TypeScript.
"sendToSharePoint": "m365 login --authType password --userName myUserName --password myPassword"
EDIT: The problem seems to be that I am building this through Docker and I need to pass through the environment variable. Trying to figure out how to do that.
exec:
command: bash
arguments:
- -c
- docker run prt-ofg-spet -env USERNAME=$USERNAME npm run-script wholeProcess
|
Trying to add secrets to GoCD pipeline in code, with docker
|
Presumably there's some unexpected line in your CSV file, e.g. a blank one. You could do something like
if len(records) < 2:
raise ValueError("Bad line: %r" % element)
else:
    yield records[1]
to get a better error message. I would also recommend looking into using Beam DataFrames for this kind of task.
|
I have a CSV that I've loaded into Google Cloud Storage, and I am creating a Dataflow pipeline that will read and process the CSV, then perform a count of listings by a single column.
How do I isolate the single column? Let's say the columns are id, city, sports_team. I want to count how many occurrences of each city show up.
My starting code is like so:
# Python's regular expression library
import re
# Beam and interactive Beam imports
import apache_beam as beam
from apache_beam.runners.interactive.interactive_runner import InteractiveRunner
import apache_beam.runners.interactive.interactive_beam as ib
class SplitRecords(beam.DoFn):
"""Spilt the element into records, return rideable_type record."""
def process(self, element):
records = element.split(",")
return [records[1]]
p = beam.Pipeline(InteractiveRunner())
lines = p | 'read in file' >> beam.io.ReadFromText("gs://ny-springml-data/AB_NYC_2019.csv", skip_header_lines=1)
records = lines | beam.ParDo(SplitRecords())
groups = (records | beam.Map(lambda x: (x, 1)) | beam.CombinePerKey(sum))
groups | beam.io.WriteToText('TEST2.txt')
I am getting an IndexError: list index out of range. I'm extremely new at all of this, so any help is appreciated.
|
Perform a transformation on a single column in Apache beam
|
All I did to make this possible is as below: I added a post block below the steps block.
post {
    success {
        findText alsoCheckConsoleOutput: true, regexp: 'There are test failures.', unstableIfFound: true
}
}
|
I have a stage in Jenkins as follows. How do I mark the build as failed or unstable if there is a test case failure? I generated the pipeline script for the Text Finder plugin but it is not working: "findText alsoCheckConsoleOutput: true, regexp: 'There are test failures.', unstableIfFound: true". I'm not sure where to place the findText step.
pipeline {
agent none
tools {
maven 'maven_3_6_0'
}
options {
timestamps ()
buildDiscarder(logRotator(numToKeepStr:'5'))
}
environment {
JAVA_HOME = "/Users/jenkins/jdk-11.0.2.jdk/Contents/Home/"
imageTag = ""
}
parameters {
choice(name: 'buildEnv', choices: ['dev', 'test', 'preprod', 'production', 'prodg'], description: 'Environment for Image build')
choice(name: 'ENVIRONMENT', choices: ['dev', 'test', 'preprod', 'production', 'prodg'], description: 'Environment for Deploy')
}
stages {
stage("Tests") {
agent { label "xxxx_Slave"}
steps {
checkout([$class: 'GitSCM', branches: [[name: 'yyyyyyyyyyz']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'zzzzzzzzzzz', url: 'abcdefgh.git']]])
sh'''
cd dashboard
mvn -f pom.xml surefire-report:report -X -Dsurefire.suiteXmlFiles=src/test/resources/smoke_test.xml site -DgenerateReports=false
'''
}
}
}}
|
how to fail the jenkins build if any test cases are failed using findText plugin
|
Since your estimators are Pipeline objects, the best_estimator_ attribute will return a pipeline as well. You have to further access the correct step with your regressor by indexing it, for example:
plot_tree(
Dtree.best_estimator_['regressor'], # <-- added indexing here
max_depth=5,
impurity=True,
feature_names=['X1', 'X2', 'X3', 'X4'], # changed this argument to make it work properly
precision=1,
filled=True
)
See the user guide for the different methods to access pipeline steps. In case you are wondering why your error message says the pipeline is not fitted, you can read more about it in my answer here.
|
This question already has an answer here: sklearn "Pipeline instance is not fitted yet." error, even though it is (1 answer). Closed 2 years ago.
I have a GridSearchCV with a pipeline using a decision tree as estimator. Now I want to plot the decision tree corresponding to the best_estimator_ of the GridSearchCV. There are some replies on Stack Overflow, but none consider a pipeline inside the GridSearchCV.
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeRegressor, plot_tree
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
import numpy as np
#Dummy data
X= [[1,2,3,5], [3,4,5,6], [6,7,8,9], [1,2,3,5], [3,4,5,6], [6,7,8,9]]
y= [50,70,80,2,5,6]
scr = StandardScaler()
dtree = DecisionTreeRegressor(random_state=100)
pipeline_tree = Pipeline([
('scaler', scr),
('regressor', dtree)
])
param_grid_tree = [{
'regressor__max_depth': [2, 3],
'regressor__min_samples_split': [2, 3],
}]
GridSearchCV_tree = GridSearchCV(estimator=pipeline_tree,
param_grid=param_grid_tree, cv=2)
Dtree = GridSearchCV_tree.fit(X, y)
plot_tree(Dtree.best_estimator_, max_depth=5,
impurity=True,
feature_names=('X'),
precision=1, filled=True)
I get: NotFittedError: This Pipeline instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator.
Any ideas?
|
Plot best decision tree with pipeline and GridsearchCV [duplicate]
|
I fixed it. I made a mistake in my gitlab-ci.yml file: in the artifacts:paths and artifacts:reports:junit tags, I should add /*.xml to the path:
artifacts:
  paths:
    - $CI_PROJECT_DIR/build/junitreport/*.xml
  reports:
    junit: $CI_PROJECT_DIR/build/junitreport/*.xml
  when: always
|
I want to test GitLab CI/CD with a simple helloWorld project of one main class and one test class, built with ant, but when I run the pipeline it gives me this error:
WARNING: Uploading artifacts as "junit" to coordinator... failed id=13076195 responseStatus=500 Internal Server Error status=500 token=fxB2Np58
WARNING: Retrying... context=artifacts-uploader error=invalid argument
WARNING: Uploading artifacts as "junit" to coordinator... failed id=13076195 responseStatus=500 Internal Server Error status=500 token=fxB2Np58
WARNING: Retrying... context=artifacts-uploader error=invalid argument
WARNING: Uploading artifacts as "junit" to coordinator... failed id=13076195 responseStatus=500 Internal Server Error status=500 token=fxB2Np58
FATAL: invalid argument
Cleaning up file based variables 00:00
ERROR: Job failed: command terminated with exit code 1
Here is my gitlab-ci.yml:
image: java:latest
stages:
- test
test-stage:
image: frekele/ant
stage: test
script:
- echo starting tests
- ant junit
artifacts:
paths:
- $CI_PROJECT_DIR/build/junitreport
reports:
junit: $CI_PROJECT_DIR/build/junitreport
when: always
|
gitlab ci/cd Uploading artifacts as "junit" to coordinator... failed 500
|
You can build fairly complex conditions with rules, which you should use anyway as work on only/except is discontinued.
You can combine two conditions in rules with the && operator, e.g. run the job only on merge requests and if $CUSTOM_VARIABLE is true:
rules:
- greet
greet_job:
stage: greet
script:
- echo "Hello!"
rules:
- if: '$CI_COMMIT_TAG'
|
I am aware of the fact that one can run a GitLab pipeline only when a certain condition is satisfied, e.g. only when the branch is master or only when a Git tag is created. For example:
stages:
- greet
greet_job:
stage: greet
only:
- master
script:
- echo "Hello!"This runs thegreet_jobjob only when the branch is calledmaster.I would like to combinetwoconditions with a logical and, ie. I would like to run a pipeline, say, only when the branch is calledmasteranda new Git Tag has been created. Is that possible?ADDEDHereI found a possible solution:- greet
greet_job:
stage: greet
only:
refs:
- master
- tags
variables:
- $CI_COMMIT_BRANCH == "master"
script:
- echo "Hello!"
|
And operator for GitLab `only` pipeline directive
|
Pipelines do not change data in place; at each step, the data is modified and passed along, but the intermediate results are not saved (with a partial exception when the cache parameter is set). That the logistic regression doesn't complain indicates that the imputation has in fact happened. y_pred shouldn't have any missing values; if it does, please let us know and provide an example dataset.
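If you want to see the imputed values explicitly (they are never written back into x_train itself), you can ask the fitted pipeline's imputation step to transform the data. A short sketch reusing the names defined in the question (pipe, x_train):
# Sketch: inspect what the fitted imputer actually produced.
# `pipe` and `x_train` are the objects defined in the question's code.
import numpy as np

imputer = pipe.named_steps["imputation"]       # the fitted SimpleImputer
x_train_imputed = imputer.transform(x_train)   # NaNs replaced by the column mean

print(np.isnan(x_train).sum())          # may be > 0: the original array is untouched
print(np.isnan(x_train_imputed).sum())  # 0: the copy passed on to the model is filled
print(imputer.statistics_)              # the mean(s) used for imputation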
|
I'm new to Python and have been learning about pipelines from DataCamp. I have been experimenting with some FIFA data that has missing NaN values. I have tried to create a pipeline with the steps of imputing any missing data (replacing it with the mean) and then creating a logistic regression. I don't seem to get any errors in the output. However, when I print things such as print(x_train) and print(y_pred), the output still returns NaN values. Would that indicate that my pipeline is not working and that the data was not correctly imputed, as surely I should be seeing the mean values rather than NaN? I would appreciate it if someone could answer the question in layman's terms, as I am new to the topic.
fif_data=pd.read_csv("fifa_draft_1.csv")
df_Foot_Dummy=pd.get_dummies(fif_data, drop_first=True)
imp=SimpleImputer(missing_values=np.nan, strategy="mean")
logreg=LogisticRegression()
x=df_Foot_Dummy["passing"].values.reshape(-1,1)
y=df_Foot_Dummy["preferred_foot_Right"]
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2, random_state=42)
steps=[("imputation", imp),("logistic_regression",logreg)]
pipe=Pipeline(steps)
pipe.fit(x_train,y_train)
y_pred=pipe.predict(x_test)
print(x_train)
print(y_pred)
|
My pipeline not imputing values correctly?
|
You can create as many pipelines as you want, as long as you are not hitting the quotas of the resources used in the pipeline. For example, if your pipeline uses BigQuery, Compute Engine, etc. and one of these hits a quota, then you are not able to create a new pipeline. See Data Fusion Quotas and limits for reference.
|
I can't find limit information about Cloud Data Fusion. Does anyone know how many data pipelines I can create with Cloud Data Fusion by default? (Link or source needed.)
|
How many pipelines can I create within Cloud Data Fusion?
|
You should use rules instead of only, as only/except are not in active development any more. With rules your pipeline will look like this:
stages:
- format
- test
formatter:
stage: format
script:
- echo ${CI_MERGE_REQUEST_TARGET_BRANCH_NAME}
- ancestor=$(git merge-base origin/${CI_MERGE_REQUEST_TARGET_BRANCH_NAME} HEAD)
- "do some formatting for "git-diff -name-only $ancestor HEAD"
rules:
- if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
unit_test:
stage: test
script:
- "do some test"
rules:
- if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH'
The formatter stage is only run on merge requests, and the unit_test stage is run on merge requests and commits on all branches.
|
stages:
- format
- test
formatter:
stage: format
only:
- merge_requests
script:
- echo ${CI_MERGE_REQUEST_TARGET_BRANCH_NAME}
- ancestor=$(git merge-base origin/${CI_MERGE_REQUEST_TARGET_BRANCH_NAME} HEAD)
- "do some formatting for "git-diff -name-only $ancestor HEAD"
unit_test:
stage: test
script:
- "do some test"For example, I have the above .gitlab-ci.yml.Right nowunit-test only runs for the regular commitformat only runs for MR.Ashttps://docs.gitlab.com/ee/ci/merge_request_pipelines/says,"If you use this feature with merge when pipeline succeeds, pipelines for merge requests take precedence over the other regular pipelines.".What I want is,Only "test" runs for each commit.Both "test" and "format" runs for MR, and merge can be approve only if both pipeline succeeds.How may I achieve this?Another pre-condition is that I want to use "CI_MERGE_REQUEST_TARGET_BRANCH_NAME" variable, which is only defined with only:[merge-requests] or any similar sorts.
|
Gitlab MR pipeline to run all regular pipelines
|
cross_val_score is meant for scoring a model by cross-validation. If you do:
cross_val_score(clf_logreg, X_test, y_test,
                scoring=make_scorer(f1_score, average='weighted'), cv=cv)
you are redoing the cross-validation on your test set, which does not make much sense, except that you are now training your model on a smaller dataset compared to your training set. I think the help page on cross-validation on scikit-learn illustrates it; you don't need to rerun a cross-validation on your test set. You just do:
predicted_logreg = clf_logreg.predict(X_test)
f1 = f1_score(y_test, predicted_logreg)
|
I'm using the training data set (i.e., X_train, y_train) when tuning the hyperparameters of my model. I need to use the test data set (i.e., X_test, y_test) as a final check, to make sure my model isn't biased. I wrote:
folds = 4
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=(1/folds), random_state=38, stratify=y)
clf_logreg = Pipeline(steps=[('preprocessor', preprocessing),
('model', LogisticRegression(solver='lbfgs', max_iter=100))])
cv = KFold(n_splits=(folds - 1))
scores_logreg = cross_val_score(clf_logreg, X_train, y_train, cv = cv)
and, to get the f1-score,
cross_val_score(clf_logreg, X_train, y_train, scoring=make_scorer(f1_score, average='weighted'),
                cv=cv)
This returns scores_logreg: [0.94422311, 0.99335548, 0.97209302] and for f1: [0.97201365, 0.9926906, 0.98925453].
For checking the test set, is it right to write
cross_val_score(clf_logreg, X_test, y_test, scoring=make_scorer(f1_score, average='weighted'), cv=cv) # not sure if it is ok to keep cv
or maybe
predicted_logreg = clf_logreg.predict(X_test)
f1 = f1_score(y_test, predicted_logreg)
The values returned are different.
|
Cross-validation and scores
|
You can use disableConcurrentBuilds() in your pipeline. This will prevent it from running until the previous runs complete.
|
How do I run a Jenkins pipeline continuously? The pipeline is started, and once it's finished it runs again, and so on.
I tried */1 * * * * to run it every minute, but it doesn't wait until the previous pipeline run is finished. I need it to wait until the run is finished and only then start running again.
|
Jenkins Pipeline Continuously
|
You should call across and encode_ordinal inside mutate, as illustrated in the following example:
dataset <- tibble(x = 1:3, y = c('a', 'b', 'b'), z = c('A', 'A', 'B'))
# # A tibble: 3 x 3
# x y z
# <int> <chr> <chr>
# 1 1 a A
# 2 2 b A
# 3 3 b B
dataset %>%
mutate(across(where(is.character), encode_ordinal))
# # A tibble: 3 x 3
# x y z
# <int> <dbl> <dbl>
# 1 1 1 1
# 2 2 2 1
# 3 3 2 2
|
I have a dataset with features of type character (not all are binary, and one of them represents a region). In order to avoid having to use the function several times, I was trying to use a pipeline and across() to identify all of the columns of character type and encode them with the function I created:
encode_ordinal <- function(x, order = unique(x)) {
x <- as.numeric(factor(x, levels = order, exclude = NULL))
x
}
dataset <- dataset %>%
encode_ordinal(across(where(is.character)))However, it seems that I am not using across() correctly as I get the error:Error:across()must only be used inside dplyr verbs.I wonder if I am overcomplicating myself and there is an easier way of achieving this, i.e., identifying all of the features of character type and encode them.
|
R: Encoding categorical data using across()
|
This is a string representation of a JSON object. You should try to replace
set_field("winlogbeat_winlog_ita", to_string(winlogbeat_winlog_italiano));
with
set_field("winlogbeat_winlog_ita", to_string(winlogbeat_winlog_italiano.value));
(note the added .value). This should avoid storing a JSON object representation (we expect to see "Un account ha effettuato il logon con successo" in winlogbeat_winlog_ita).
However, this may not be your only issue. Check that the field type is not "compound": this may occur if, in the past, you sent another data type in this field for the current index. The best way to know if you are in this case is to click on "Fields" (in the sidebar, when searching), then click on the field winlogbeat_winlog_ita and see if the popup says "winlogbeat_winlog_ita = string" or if it shows mixed field types. If it is a compound value, you should rotate the active write index, generate some logs, and search again (search from the date/time at which you performed the rotation to avoid taking old compound values into consideration).
|
I am using a pipeline connected to a .csv document to create a new field in my Windows logs on Graylog.
As you can see from the screenshot, I can see the field in every log, but when I click on "show top values" to create a new widget, Graylog doesn't show anything. I think this happens because the value in the field is not a string; in fact, it's between curly brackets.
The problem is that I can't find a way to show these values in a widget. I tried changing my pipeline rule but got no results. The following is one of the many attempts I made with the rule:
rule "eventid_windows_rule"
when
has_field("winlogbeat_winlog_event_id")
then
let winlogbeat_winlog_italiano = lookup("eventid_widget_windows_lookup", ($message.winlogbeat_winlog_event_id));
set_field("winlogbeat_winlog_ita", to_string(winlogbeat_winlog_italiano));
end
|
Why this extracted Graylog field is not showing in my widget?
|
I "solved" my problem , I redirected output to my /tmp/ directory and then had zathura read from there . I also put it all in a script .#!/bin/sh
d=$(date +'%M_%S');
man -k . | fzf -e --tiebreak=begin | awk '{print $1}' | xargs man -Tpdf > /tmp/man_${d};
zathura /tmp/man_${d} 2> /dev/null &
|
man -k . | fzf -e --tiebreak=begin | awk '{print $1}' | xargs man -Tpdf | zathura -
# searches for a man page and then outputs it as pdf to zathura
This is the command; it works great, other than zathura starting blank while it waits for stdin to give it input. It is really annoying having to change focus from zathura back to the terminal and then back to zathura.
I am fairly new to scripting, so I thought there might be a way around this that I just don't know about. Thanks anyway!
|
Program starts before it's piped into
|
Install the webdriver-manager package and add it to your requirements.txt file:
pip install webdriver-manager
and use it like this:
from selenium import webdriver
from webdriver_manager.firefox import GeckoDriverManager
driver = webdriver.Firefox(executable_path=GeckoDriverManager().install())
With that package, you shouldn't have to worry about geckodriver anymore.
|
I'm using these commands to copy geckodriver to the path, but I still encounter a problem:
- wget https://github.com/mozilla/geckodriver/releases/download/v0.29.0/geckodriver-v0.29.0-linux64.tar.gz
- echo "geckodriver downloaded successfully"
- tar -xvzf geckodriver*
- chmod +x geckodriver
- export PATH=$PATH:/usr/local/bin
ERROR :
==============================================================================
Firefox
==============================================================================
WebDriverException: Message: 'geckodriver' executable needs to be in PATH.
------------------------------------------------------------------------------
|
How to copy geckodriver to path /usr/local/bin/ in Gitlab pipeline?
|
An easy way to do it is to schedule both Build 1 and stages A and B on some node with a unique label. This way, if Build 2 is running and already using that node, Build 1 will wait in the queue for the node.
Another option to consider is using the Lockable Resources plugin.
|
I have two builds:
- Build 1
- Pipeline Build 2 with stages: A -> B -> C
Is it possible to block Build 1 till stage B is finished? The standard blocking plugin checks only the names of other jobs.
|
How to block jenkins build till certain pipeline stage is finished
|
How can I check if I have any test cases failing within my test plan when executing my pipeline in Azure DevOps?
In a Test Plan, the test case itself has no outcome state (e.g. fail, pass). The outcome state is for Test Points, under Test Plan -> Execute tab. To check if there are failed test points in the Test Plan, you could use a PowerShell task to run the REST APIs Test Point - Get Points and Test Suites - Get Test Suites For Plan:
$token = "PAT"
$url1 = " https://dev.azure.com/{OrganizationName}/{ProjectName}/_apis/test/Plans/{TestPlanId}/suites?api-version=5.0"
$token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($token)"))
$response1 = Invoke-RestMethod -Uri $url1 -Headers @{Authorization = "Basic $token"} -Method Get -ContentType application/json
$i = 0
ForEach ($SuitId in $response1.value.id )
{
$url="https://dev.azure.com/{OrganizationName}/{ProjectName}/_apis/testplan/Plans/{TestPlanName}/Suites/$($SuitId)/TestPoint?api-version=6.0-preview.2"
$response = Invoke-RestMethod -Uri $url -Headers @{Authorization = "Basic $token"} -Method Get -ContentType application/json
ForEach( $outcome in $response.value.results.outcome )
{
echo $outcome
if($outcome -eq "failed")
{
$i ++
}
}
}
echo $i
if ($i -eq 0)
{
echo "No error"
}
else
{
echo "##vso[task.logissue type=warning]the error value is $i"
}Result:
|
How can I check if I have any test cases failing within my test plan when executing my pipeline in azure devops?
|
Check if my test plan has a failed test case in the pipeline execution
|
You can always use theincludetag to reuse code in gitlab-cihttps://docs.gitlab.com/ee/ci/yaml/includes.htmlAs a good example, I like the GitLab code pipeline itselfhttps://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab-ci.yml
|
I have several projects that have the same makefile targets;
let's say :make init,make compile,make report.for each repo project, I then have a quite complex pipeline script that defines several jobs (pseudo code here)stage: one
script:
make init + make compile
stage: two
script:
make init + make report
stage: three
script:
make init TEST=true
...The fact is that yaml script is the same and will be the same for all my projects, but it's quite painfull to maintain (for each new feature I have to update all repos)...I would like to maintain/modify only one file.Is there a simple solution to this?
|
Is there a way to make gitlab-ci pipeline not duplicate and easy to maintain?
|
Set these properties:
- Search Recursively to true;
- File Filter Regex to *.xls;
and leave Path Filter Regex at its default value.
|
Hello guys, I'm trying to get all xls files from all folders on an FTP server, and I'm using a regular expression in the Path Filter Regex property, but with no result. When I put an absolute path like '/TEST/TEST/', I do get the files.
Here's the configuration (see attached image). Thanks in advance.
|
Path filter regex in GETftp processor in apache nifi
|
Note: I am assuming your Jenkins server is running a Unix OS, since you don't specify the OS in your question.
You can check if the Charts.tgz file exists locally in chart_directory with the find command:
find chart_directory -type f -name Charts.tgz
-type f tells find to search for regular files, and -name Charts.tgz tells find to search specifically for Charts.tgz.
find will have an exit code of zero even if it finds nothing. To work around this, you can pipe to read to get a non-zero exit code if find finds nothing. (read has a non-zero exit code if it encounters an EOF, which is exactly what will happen if find doesn't find anything.)
In your code, you can download Charts.tgz only if it is not present in chart_directory like this:
find chart_directory -type f -name Charts.tgz | read || {
# Your code here (between the '{' and '}') to clone the git repo
# to retrieve Charts.tgz
}
What the || operator does is: the expression on the right side is only evaluated if the expression on the left side has a non-zero exit code. In this case, this means the code between the { and } will only be evaluated if find doesn't find anything.
|
I have a tar/zip file in a git repo. This tar/zip file needs to be put in the directory named 'chart_directory'. Using my Jenkinsfile, I want to implement logic that downloads this tar/zip file from the git repo only if 'chart_directory' does not already contain a copy of it.
e.g. there is a file named 'Charts.tgz' in a git repo named 'testing.git'.
I need to download this 'Charts.tgz' file into 'chart_directory' only if 'chart_directory' does not already contain the file 'Charts.tgz'. If Charts.tgz is already present in 'chart_directory', then it should skip downloading the file.
|
How can I check for the file that is present in the other directory and not in the workspace from a Jenkinsfile
|
You could also do the same in a slightly different way using a lambda:
import unidecode
from toolz import pipe
x = '9\xa0766'
pipe(
x,
unidecode.unidecode,
lambda x: x.replace(" ", ""),
int
)
You could also make it into a function to reuse it elsewhere:
import unidecode
from toolz import pipe, compose_left
x = '9\xa0766'
# this becomes a function that can be called
converter = compose_left(
unidecode.unidecode,
lambda x: x.replace(" ", ""),
int
)
result = converter(x)
print(result)
#9766
|
I have a string that I want to clean using pipe from toolz, but I can't figure out how to apply the str.replace method in the pipeline. Here is an example:
x = '9\xa0766'
from toolz import pipe
import unidecode
# Can do like this, but don't want to use nesting.
int(unidecode.unidecode(x).replace(" ", ""))
# Want to try to do it like this, with a sequence of steps, applying transformations to the previous output.
pipe(x,
unidecode.unidecode,
replace(" ", ""),
int
) # DON'T WORK!!
# NameError: name 'replace' is not defined
More generally, notice that the output of unidecode.unidecode is a string, so I want to be able to apply string methods to it.
pipe(x,
unidecode.unidecode,
type
) # str
Is it possible with pipe?
|
Using toolz.pipe with methods
|
You can use the AWS CLI to start the pipeline manually; the same thing is possible via an API call. From "Start a pipeline manually":
aws codepipeline start-pipeline-execution --name MyFirstPipeline
Alternatively, you can add a review button within CodePipeline. From "Manage approval actions in CodePipeline":
In AWS CodePipeline, you can add an approval action to a stage in a
pipeline at the point where you want the pipeline execution to stop so
that someone with the required AWS Identity and Access Management
permissions can approve or reject the action.
|
I have built an AWS CDK CodePipeline that triggers a build on every git commit, but I want to trigger the build through a client function. Can anyone please guide me on how I can trigger the start-pipeline function from the client side?
|
How do I start AWS Codepipeline from client?
|
ColumnTransformer takes the columns in the same order they are defined in your dataframe, therefore you may consider obtaining them with pandas select_dtypes from your dataframe. Supposing your data is contained in df:
numeric_columns = list(df.select_dtypes('number'))
categorical_columns = list(df.select_dtypes('object')) + list(df.select_dtypes('category'))
|
preprocessor = ColumnTransformer(
[
('num', StandardScaler(), numeric_features),
('cat', OneHotEncoder(handle_unknown='ignore'), categorical_features)
]
)
I want to perform transformations on both some numeric attributes and some categorical features. Running
test=preprocessor.fit_transform(X_train)
returns a numpy array, which does not have the names of the columns. According to the documentation, the ColumnTransformer should have a function get_feature_names() which would return the names of the new features. However, when I run it I get: AttributeError: Transformer num (type StandardScaler) does not provide get_feature_names.
I want to get the names of the columns dynamically because I don't know the number of categories in advance.
|
How to get the names of the new columns after performing sklearn Column Transformer
|
df1 %>% filter(col1 %in% df2$col2) %>% remove_rownames %>% column_to_rownames('col1')
|
I am trying to subset/filter a data frame according to the corresponding column elements from another data frame.
Here is what I used to do this:
df <- df1[df1$col1 %in% df2$col2,]
And then I am going to set the column as row names:
df <- df %>% remove_rownames %>% column_to_rownames('col1')
However, I have no idea how to combine these two pieces of code into one using %>%
|
How to subset a data frame with R pipeline
|
You can try the following syntax:
build job: 'Darkweb', wait: false
The wait: false option would allow the first job to finish without waiting for the completion of the second job.
Let me know if it works!
|
I used this:
post {
always {
sh "echo Jenkins Job is Done"
junit 'target/surefire-reports/*.xml'
echo 'Run darkWeb Test pipeline!'
build job: 'DarkWeb'
}
and it works. The problem is that the original job continues running while the second job (DarkWeb) is running too. I want the 'DarkWeb' job to run only after the original job has completely finished.
|
How to trigger a job when one job is finished
|
I thought about this too, and although I think that scaling with the full data leaks some information from the training data into the validation data, I don't think it's that severe.
On one side, you shuffle the data anyway, and you assume that the distributions in all sets are the same, so you expect the means and standard deviations to be the same. (Of course, this is only theoretical (law of large numbers).)
On the other side, even if the means and stds are different, this difference will not be significant.
In my opinion, yes, you might have some bias, but it should be negligible.
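For what it's worth, here is a minimal sketch (with generated toy data, not from the original post) of the setup the question describes: the scaler sits inside the Pipeline, and cross_val_score clones and fits the whole pipeline on each CV training split, so the scaler's means and stds are computed per split rather than once up front:
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# The scaler is part of the estimator, so it is refit inside every CV fold
pipe = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(pipe, X, y, cv=5)
print(scores.mean())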
|
In machine learning, you split the data into training data and test data. In cross validation, you split the training data into training sets and a validation set.
"And if scaling is required, at each iteration of the CV, the means and standard deviations of the training sets (not the entire training data) excluding the validation set are computed and used to scale the validation set, so that the scaling part never includes information from the validation set."
My question is: when I include scaling in the pipeline, at each CV iteration, is scaling computed from the smaller training sets (excluding the validation set) or the entire training data (including the validation set)? Because if it computes means and std from the entire training data, then this will lead to estimation bias in the validation set.
|
Sklearn Pipeline: is there leakage /bias when including scaling in the pipeline?
|
Please add parentheses (making the transformer a tuple inside the list) when initialising the transformer:
preprocessor = ColumnTransformer(transformers=[('num', num_trans, num_feat)], remainder='passthrough')
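As an illustration, a minimal end-to-end sketch with the corrected transformer; the column names mirror the question, but the data values are invented toy numbers:
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# toy stand-in for df1
df1 = pd.DataFrame({
    "Quantity": [10, 20, 30, 40, 50, 60],
    "Price":    [1.5, 2.0, 2.5, 3.0, 3.5, 4.0],
    "Weight":   [5, 6, 7, 8, 9, 10],
    "ID":       [1, 2, 3, 4, 5, 6],
    "target":   [100, 200, 300, 400, 500, 600],
})

num_feat = ["Quantity"]
num_trans = Pipeline([("scale", StandardScaler())])
# 'Quantity' is scaled, the remaining columns pass through unchanged
preprocessor = ColumnTransformer(transformers=[("num", num_trans, num_feat)],
                                 remainder="passthrough")
pipe = Pipeline([("preproc", preprocessor),
                 ("rf", RandomForestRegressor(random_state=0))])

y = df1["target"]
x = df1.drop(["target", "ID"], axis=1)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
pipe.fit(x_train, y_train)
print(pipe.predict(x_test))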
|
I have a dataframe which has 4 numeric columns and I am trying to scale only one column using StandardScaler in a Pipeline. I used the code below to scale and transform my column.
num_feat = ['Quantity']
num_trans = Pipeline([('scale', StandardScaler())])
preprocessor = ColumnTransformer(transformers = ['num', num_trans, num_feat])
pipe = Pipeline([('preproc', preprocessor),
('rf', RandomForestRegressor(random_state = 0))
])
After doing this I am splitting my data and training my model as below.
y = df1['target']
x = df1.drop(['target','ID'], axis = 1)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2)
pipe.fit(x_train, y_train)This gives me errorValueError: not enough values to unpack (expected 3, got 1). I understand this could be because of other 3 numeric columns in my dataframe. So how do I concatenate scaled data to my remaining dataframe and train my model on whole data. Or is there any better way to do this.
|
Using StandardScaler on specific column in Pipeline and concatenate to original data
|
"My reason for believing there may be is because both instructions are still trying to access $gp, and only after reaching for $gp is whatever constant added to reach the updated address. Would this be correct?"
No, as $gp is not changing in this code sequence.
These hazards occur within the processor, and they have to do with the order in which the processor executes the microarchitectural operations of register read and register write. Since the pipeline spreads instructions out over several cycles (used by the pipeline stages), the read of a register and the write of a register (for a different instruction) can occur in some kind of overlapping manner.
These hazards do not concern themselves with memory reads and writes — just register reads and writes.
A lw $f6, 20($gp) has one register read, $gp, and one register write, $f6. A sw $f3, 32($gp) has two register reads, $gp and $f3, and no register writes.
WAR — Write After Read — requires a register write for the latter instruction and a register read for the former. Instructions 1 & 4 do not meet the pattern because instruction 4 has no register writes (yes, it writes to memory and that has its own ordering issues, but the W in WAR refers to a register write, which sw doesn't do).
|
Say we have the following set of instructions:
1: lw $f6, 20($gp)
2: lw $f2, 28($gp)
3: mult $f0, $f2, $f4
4: sw $f3, 32($gp)
I can see the possibility of there being a WAR hazard between instructions 1 and 4 (likewise for 2 and 4), but am unsure.
Basically we read from $gp+20 and write into $f6, then read from $f3 and write into $gp+32. Surely if both instances were dealing with $gp+20, there would be a WAR data hazard, but because the addresses ultimately being affected are different ($gp+20 vs. $gp+32), I'm unsure if there could still be a hazard. My reason for believing there may be is because both instructions are still trying to access $gp, and only after reaching for $gp is whatever constant added to reach the updated address. Would this be correct?
Thanks
|
WAR Data hazard in MIPS-like pipeline architecture
|
I believe it is not possible exactly in the way you want, but you could work around it by defining agents at the stage level:
pipeline {
agent none
stages {
stage('A') {
when { /* some condition */ }
agent {
label "node_name"
}
steps {
sameCodeForBothStages()
}
}
stage('B') {
when { /* some condition */ }
agent {
kubernetes {
cloud 'cloudName'
namespace 'NameSpaceName'
label 'AgentLabel'
inheritFrom 'agent'
}
}
steps {
sameCodeForBothStages()
}
}
}
}
void sameCodeForBothStages() {
sh "echo 'Hello'"
}
The obvious disadvantage is that two separate stages will show up in the pipeline view. To avoid duplicate code in both stages you could call a function like I did in the example.
|
How can I use multiple types of agents in a single declarative Jenkinsfile?
I have two labels.
The 1st is a simple label agent:
agent {
label "node_name"
}
The 2nd is of kubernetes type:
agent {
kubernetes {
cloud 'cloudName'
namespace 'NameSpaceName'
label 'AgentLabel'
inheritFrom 'agent'
}
}
And I want to select between these two based on a condition:
if some parameter is given, then run the node agent, else run the kubernetes agent.
|
How can I use multiple types of agents in a single declarative Jenkinsfile
|
Currently there is no way to set the Merge When Pipeline Succeeds option project-wide; it has to be done for each merge request. However, GitLab does support the use of git push options to interact with some merge request settings. To create a merge request and set it to merge when the pipeline finishes, you can do:
git push -o merge_request.create -o merge_request.target=my-target-branch -o merge_request.merge_when_pipeline_succeeds
You can also set a git alias to set any push options you need when you push:
git config --global alias.push_merge "push -o merge_request.create -o merge_request.target=master -o merge_request.merge_when_pipeline_succeeds"
and use it with git push_merge my_branch.
You can view all the available push options that GitLab supports, and useful aliases, in the docs here: https://docs.gitlab.com/ee/user/project/push_options.html
|
How can I perform the GitLab branch merge with master automatically? If anyone knows about it, please help me with this development activity.
|
Automatic Merge Branch with Master on gitlab pipeline succeeds
|
I recently had that issue too; what we did was generate a CIToken, which expires in a year. Here is the token type description:
"This multi-use token specification is designed to be used with the Fortify continuous integration plugins that automatically upload an FPR to Software Security Center as part of the build process, and download vulnerability statistics for the application version being built."
It is not a permanent token, but it is better than a token that expires after 24 hours.
|
Dear stackoverflow users and DevSecOps'ers,
I came across Fortify: How to get issue(vulnarability) list under a project using fortify rest api while searching for a solution to my problem, but it doesn't help and addresses a different aspect.
Desire: I want to query the Fortify API (or CLI) automatically in my development pipeline after each scan is performed, to get the list of issues (vulnerabilities) and fail builds if any issue is found.
Problem: The Fortify API accepts a token that expires in, let's say, 24h. In order to generate a token I need user credentials. It's fine to log in manually and generate the token if I want to query the API from Postman or a console... but I want to scan each code change hooked into my CI/CD tools, and if something is found, break the build.
I can't store my user credentials and use them in the pipeline, as that's inappropriate.
I can't have a service account or anything that is a non-real user to log in or access the API.
There is no option to have a permanent token.
What is your advice on how to tackle this?
|
Fortify: How to automate getting issues (vulnerability) list under a project using Fortify API/CLI, to break my pipeline if there are vulnerabilities
|
CodePipeline has a few options for the Source of the pipeline; for a Git-style deployment there is:
CodeCommit
GitHub
CodeStarSourceConnection (Bitbucket)
These options will trigger the pipeline if a specific branch within your repository receives additional changes, taking an archive of the code at the latest changes.
In addition, if you require multiple branches, you could take a look at using the S3 source combined with some automation to push the code from any branch to S3, which would trigger the CodePipeline. Alternatively, create multiple pipelines if you have a select few branches which must be deployed regularly.
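If the pipeline is defined with the AWS CDK, a rough Python sketch of pointing it at an external Git provider via a CodeStar connection might look like the following; the connection ARN, owner, repo and branch values are placeholders I made up, and the approval stage is only there so the pipeline has the two stages CDK requires:
from aws_cdk import App, Stack
from aws_cdk import aws_codepipeline as codepipeline
from aws_cdk import aws_codepipeline_actions as actions

app = App()
stack = Stack(app, "PipelineStack")

source_output = codepipeline.Artifact()
source_action = actions.CodeStarConnectionsSourceAction(
    action_name="Git_Source",
    owner="my-org",    # placeholder
    repo="my-repo",    # placeholder
    branch="main",     # placeholder
    connection_arn="arn:aws:codestar-connections:eu-west-1:111111111111:connection/example",  # placeholder
    output=source_output,
)

pipeline = codepipeline.Pipeline(stack, "Pipeline")
pipeline.add_stage(stage_name="Source", actions=[source_action])
pipeline.add_stage(stage_name="Approve",
                   actions=[actions.ManualApprovalAction(action_name="Approve")])
app.synth()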
|
I have an AWS pipeline with CodeCommit as its repo. What I am trying to achieve is to change the repo to my existing git repo in the pipeline.
Is there a way to achieve it? The documents I am referring to show how to mirror the contents of the git repo locally and then push it to CodeCommit. That doesn't fit the use case.
I am aiming for something like git acting as source control in the AWS pipeline, where any commit in it will trigger a build. Could anyone suggest whether we could achieve this, or how to build a pipeline with a git repo?
|
Replacing CodeCommit with Git in an existing AWS pipeline
|
A few weeks ago, self-hosted runners were made available as a public beta. Here are the details: https://community.atlassian.com/t5/Bitbucket-Pipelines-articles/Bitbucket-Pipelines-Runners-is-now-in-open-beta/ba-p/1691022
Additionally, if you're looking to retain some of your files from one build to the next to save doing the same work over and over again, have a look at caches: https://support.atlassian.com/bitbucket-cloud/docs/cache-dependencies/
There are some built-in ones that you could use, but you can define your own custom ones as well. Essentially it's just a way of preserving the contents of a directory for a future build.
|
We are currently using Bitbucket Cloud to host our grails-app repository. We want to set up some pipelines to do things like run unit tests and make sure the app compiles before being able to merge a branch to master.
I know this can pretty easily be done by letting them host the pipeline and committing a well written pipe file; however, there is a problem in that our app is very large, and even on brand new MacBook Pros it takes 20 minutes to compile, while on some older machines it can take 2 hours or more. Grails, thankfully, only compiles files that have changes in them since the last compilation. However, this can't be used on a Bitbucket pipe that's working off a fresh pull of the app every time it runs.
My solution to this was to set up a pipeline to run for us internally so that it can already have the app pulled, and just switch to the desired branch and run from there. This still might take time if switching between 2 very diverged branches, but it's better than compiling from fresh every time.
I can't seem to find any documentation on hosting a pipeline internally with Bitbucket Cloud. Does anyone know if this is possible, and if so, where there is documentation for it?
It would also be acceptable to find a solution to the long compilation problem itself with Bitbucket-hosted pipelines.
|
Can you host a bitbucket pipeline internally?
|