Response | Instruction | Prompt
---|---|---|
Using Artifact.trim() did not fix it. But using Artifact.substring(0, Artifact.length() - 1) instead did the trick.
|
I am trying to run a Linux command in a Jenkins pipeline using sh, but for some reason my command gets split in two and Jenkins tries to execute the parts separately.
The result of the pipeline is:
curl --insecure -u ':' --upload-file ./file.ear
curl: no URL specified!
Please see the image.
pipeline {
agent any
stages {
stage('use curl'){
steps{
script{
withCredentials([
usernamePassword(credentialsId: 'CREDENTIALS', usernameVariable: 'USER', passwordVariable: 'PWD')
]) {
sh(script:"curl --insecure -u ${USER}:${PWD} --upload-file ./" + Artifact + " " + REPO_URL + Artifact.substring(0, Artifact.length() - 5) + "/", returnStdout: false)
} //withCredentials
} // scripts
} //steps
} //stage
}
}
|
Jenkins - "sh" splits my command and tries to execute it separatly
|
Imo, the point here is the following. On the one side, the pipeline instances model_lg, model_dt etc. are not explicitly fitted (you're not calling the .fit() method on them directly) and this prevents you from trying to access the coef_ attribute on the instances themselves. On the other side, by calling .cross_validate() with the parameter return_estimator=True (which is possible only with .cross_validate() among the cross-validation methods), you can get the fitted estimators back for each cv split, but you should access them via your dictionaries cv_results_lg, cv_results_dt etc. (on the 'estimator' key). Here's the reference in the code and here's an example:
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_validate
X, y = load_iris(return_X_y=True)
model_lg = Pipeline([("preprocessing", StandardScaler()), ("classifier", LogisticRegression())])
cv_results_lg = cross_validate(model_lg, X, y, cv=5, return_train_score=True, return_estimator=True)
These would be - for instance - the results computed on the first fold:
cv_results_lg['estimator'][0].named_steps['classifier'].coef_
Useful insights on related topics might be found in:
How to get feature importances of a multi-label classification problem?
Get support and ranking attributes for RFE using Pipeline in Python 3
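For the tree-based pipeline in the question the same lookup applies, except that a DecisionTreeClassifier exposes feature_importances_ rather than coef_ (a small sketch added for completeness, not part of the original answer):

```python
# trees have no coefficients; use the impurity-based importances instead
cv_results_dt['estimator'][0].named_steps['classifier'].feature_importances_
```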
|
I have defined the following pipelines using scikit-learn:
model_lg = Pipeline([("preprocessing", StandardScaler()), ("classifier", LogisticRegression())])
model_dt = Pipeline([("preprocessing", StandardScaler()), ("classifier", DecisionTreeClassifier())])
model_gb = Pipeline([("preprocessing", StandardScaler()), ("classifier", HistGradientBoostingClassifier())])
Then I used cross-validation to evaluate the performance of each model:
cv_results_lg = cross_validate(model_lg, data, target, cv=5, return_train_score=True, return_estimator=True)
cv_results_dt = cross_validate(model_dt, data, target, cv=5, return_train_score=True, return_estimator=True)
cv_results_gb = cross_validate(model_gb, data, target, cv=5, return_train_score=True, return_estimator=True)
When I try to inspect the feature importance for each model using the coef_ attribute, it gives me an attribute error:
model_lg.steps[1][1].coef_
AttributeError: 'LogisticRegression' object has no attribute 'coef_'
model_dt.steps[1][1].coef_
AttributeError: 'DecisionTreeClassifier' object has no attribute 'coef_'
model_gb.steps[1][1].coef_
AttributeError: 'HistGradientBoostingClassifier' object has no attribute 'coef_'
I was wondering how I can fix this error? Or is there any other approach to inspect the feature importance in each model?
|
Inspection of the feature importance in scikit-learn pipelines
|
@Emily Wong try to see this link: https://stackoverflow.com/a/69660601/7505687
I think you could work with multi-modules in Gradle and build MainApp, and Gradle will build and include the TestAPI module naturally.
Note: check if MainApp is including all dependencies in the final jar (Gradle modules and other implementation dependencies).
E.g. if you need to create a MainApp.jar with all dependencies as a fatJar, try to add this task in MainApp/build.gradle:
jar {
// if you have main class
manifest {
attributes "Main-Class": "${mainClassName}"
}
duplicatesStrategy = DuplicatesStrategy.EXCLUDE
from { configurations.compileClasspath.collect { it.isDirectory() ? it : zipTree(it) } }
}
|
I am not sure if this is a duplicate but I couldn't find a similar question. I have a build.gradle file in the MainApp, which has dependencies on another project. Currently it looks like the below.
dependencies {
compile project(':TestAPI')
...
}
The TestAPI itself is another project, and it includes its own build.gradle file. How can I make the MainApp's build.gradle call the TestAPI build.gradle first and then use the output jar file as its own dependency?
You might ask why I want to do this: each of these projects is an individual repository in GitLab, and I do have a CI/CD pipeline. When I trigger the MainApp pipeline, I want it to compile the TestAPI project first, use its jar as a dependency, and then proceed with the MainApp pipeline. Any hints or suggestions are highly appreciated.
|
gradle script to build another gradle first
|
The key here is checking which value is contained in the other dataset. Basically it is a conditional operation comparing the two vectors of IDs, which in this case can be easily solved using %in%.
Data:
# Dataset with 5 letters and values
dat <- data.frame(
id = LETTERS[1:5],
val = 1:5
)
# Subset
minidat <- dat[4,]
Base R:
new_dat <- dat # Or modify in place
new_dat$is_in_smaller <- ifelse(dat$id %in% minidat$id, "yes", "no")
new_dat
## id val is_in_smaller
## 1 A 1 no
## 2 B 2 no
## 3 C 3 no
## 4 D 4 yes
## 5 E 5 no
{dplyr} approach, identical output:
library(dplyr)
new_dat2 <- dat %>%
mutate(is_in_smaller = ifelse(id %in% minidat$id, "yes", "no"))
{data.table}:
library(data.table)
new_dat3 <- as.data.table(dat) # Assuming you already have a data.table object
new_dat3[, is_in_smaller := ifelse(id %in% minidat$id, "yes", "no")]
|
I am trying to create a new column in a dataset. I want this column to be a "yes" or "no" column. Let's say that I have one dataset that has 1000 rows including a unique ID and another dataset that has 200 rows including a unique ID. The unique IDs match between the datasets because both datasets are from the same database. I want to create a column in the larger dataset based on a search criterion: I want to search the unique IDs in the larger dataset and the new column will say "yes" for any of the unique IDs that also belong to the unique ID column in the smaller dataset. Basically, if the ID is found in both the small and the large dataset it will say Yes, and if not then No.
Example: This is what I want, except in my case the 2 columns will be in different datasets.
I've tried to do this in R and even in Excel. I've tried merging the 2 datasets by the ID column but that doesn't get me what I want, which is a new column "yes" or "no" if the ID is found in both datasets. What should I do? I think I can use %>% to solve my problem but I'm lost where to start.
|
In R, search for several unique IDs from one dataset in another
|
Triggers are treated as 2 separate triggers - they combine as OR, not AND. So in your case, it's either a tag push or workflow_run completed. There is no way to filter the workflow_run trigger by tag, I'm afraid.
How I would solve this is to trigger your Semantic Release directly from your Test workflow using a workflow dispatch event:
- name: Invoke workflow in another repo with inputs
  uses: benc-uk/workflow-dispatch@v1
  with:
    workflow: Semantic Release
    repo: yourcompany/yourrepo
You have to make your Semantic Release be triggered by workflow_dispatch first to make it work:
on:
  workflow_dispatch
|
I want to trigger a release GitHub Action only when these two conditions are true:
- I push a new tag
- The test GitHub Action (unit tests) was successfully run
This is what I tried:
name: Semantic Release
on:
push:
tags:
- 'v*'
workflow_run:
workflows:
- "test" # basically i have another test action to run unit tests
types:
- completed
jobs:
release:
name: "Release on Pypi"
runs-on: ubuntu-latest
concurrency: release
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Python Semantic Release
uses: relekang/python-semantic-release@master
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
repository_username: __token__
repository_password: ${{ secrets.PYPI_TOKEN }}
I set my test action to trigger when I push to master, which is working fine. However, the release action is always triggered after the test action even when I do not push a new tag.
I want the release action to trigger only when I push a new tag and wait for the test action to complete successfully. Why is my example not working in this case?
|
How to trigger a github action only when a tag is pushed and after a workflow is completed
|
It might be an indentation issue. The following indentation should work.
pipelines:
  custom:
    dbt-run:
      - step:
          name: 'Validate'
          script:
            - cd dbt_4flow
            - dbt compile
          condition:
            changesets:
              includePaths:
                - "dbt_4flow/*"
|
I created a Bitbucket pipeline that looks like this:
pipelines:
custom:
dbt-run:
- step:
name: 'Validate'
script:
- cd dbt_4flow
- dbt compile
condition:
changesets:
includePaths:
- "dbt_4flow/*"However, when I try to run it via the UI, I get this error even though I have already given a condition. What am I doing wrong? Is it the syntax?There is an error in your bitbucket-pipelines.yml at [pipelines > custom > dbt-run > 0 > step > condition]. To be precise: At least one condition is required
|
error in bitbucket-pipelines.yml at [pipelines > step > condition]. To be precise: At least one condition is required
|
If both the models need to be jointly optimized, you could run a SageMaker HPO job in script mode and define both the models in the script. Or you could run two HPO jobs, optimize each model, and then create the PipelineModel. There is no native support for doing an HPO job on a PipelineModel.
I work at AWS and my opinions are my own.
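As a rough illustration of the script-mode idea, a tuning job over a single training script might be sketched as below; the entry point, metric regex, instance type, role ARN and hyperparameter names are all hypothetical and would have to match whatever your own train.py actually parses and prints:

```python
from sagemaker.sklearn import SKLearn
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter

# train.py (hypothetical) fits both models back to back and prints a line
# such as "validation-score: 0.87" that the tuner can extract via regex.
estimator = SKLearn(
    entry_point="train.py",
    framework_version="0.23-1",
    py_version="py3",
    instance_type="ml.m5.xlarge",
    instance_count=1,
    role="arn:aws:iam::123456789012:role/sagemaker-role",  # placeholder role ARN
)

tuner = HyperparameterTuner(
    estimator,
    objective_metric_name="validation:score",
    metric_definitions=[{"Name": "validation:score",
                         "Regex": "validation-score: ([0-9\\.]+)"}],
    hyperparameter_ranges={
        # hypothetical names; the script decides which model each one belongs to
        "first_model_alpha": ContinuousParameter(0.01, 1.0),
        "xgb_max_depth": IntegerParameter(3, 10),
    },
)
tuner.fit({"train": "s3://your-bucket/train/"})  # placeholder S3 path
```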
|
We want to tune a SageMaker PipelineModel with a HyperparameterTuner (or something similar) where several components of the pipeline have associated hyperparameters. Both components in our case are realized via SageMaker containers for ML algorithms.
model = PipelineModel(..., models = [ our_model, xgb_model ])
deploy = Estimator(image_uri = model, ...)
...
tuner = HyperparameterTuner(deply, .... tune_parameters, ....)
tuner.fit(...)
Now, there is of course the problem of how to distribute the tune_parameters to the pipeline steps during the tuning. In scikit-learn this is achieved by specially naming the tuning parameters <StepName>__<ParameterName>. I don't see a way to achieve something similar with SageMaker, though. Also, a search for the two keywords brings up the same question here, but it is not really what we want to do. Any suggestion how to achieve this?
|
SageMaker: PipelineModel and Hyperparameter Tuning
|
ReadFromText always reads text files one line at a time; if your JSON objects are split across lines you'll have to do a different kind of read. One option is to read each file in its entirety in a DoFn, e.g.
with beam.Pipeline() as p:
    readable_files = (
        p
        | beam.Create([...set of files to read...])  # or use fileio.MatchAll
        | fileio.ReadMatches())
    file_contents = readable_files | beam.ParDo(ReadFileDoFn())
where ReadFileDoFn could use the same underlying libraries that ReadFromText does, e.g.
class ReadFileDoFn(beam.DoFn):
    def process(self, readable_file):
        with readable_file.open() as handle:
            yield handle.read()
This will result in a PCollection whose elements are the entire contents of each file. Now to split up your text file into individual JSON objects, you can do something like
def text_blob_to_json_objects(text):
    # Turns a concatenated set of objects like '{...} {...}' into
    # a single json array '[{...}, {...}]'.
    as_json_array = '[%s]' % re.sub(r'}\s*{', '},{', text, flags=re.M)
    # Returns the parsed array.
    return json.loads(as_json_array)
file_contents | beam.FlatMap(text_blob_to_json_objects)
You can then follow this with a multi-output DoFn to separate out the various types.
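For that last step, a multi-output DoFn could look roughly like the sketch below; json_objects stands for the PCollection produced by the FlatMap above, and the routing rule is made up here - replace it with whatever distinguishes your six JSON shapes:

```python
import apache_beam as beam
from apache_beam import pvalue

class RouteByShape(beam.DoFn):
    def process(self, obj):
        # hypothetical rule: complete records vs. records with missing values
        if all(v is not None for v in obj.values()):
            yield obj  # main output
        else:
            yield pvalue.TaggedOutput('incomplete', obj)

routed = json_objects | beam.ParDo(RouteByShape()).with_outputs(
    'incomplete', main='complete')
complete_pcoll = routed.complete
incomplete_pcoll = routed.incomplete
```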
|
I'm using Apache Beam with Python and I have a ppl file which looks something like this:
FileExample.ppl:
{"name":"Julio"}
{"name":"Angel", "Age":35}
{"name":"Maria","cellphone":NULL}
{"name":NULL,"cellphone":"3451-14-12"}etc...I need to split the file not for each line but for each json (in the real file the jsons are not only of one line but multiple and undefined amount of lines).And then I need to validate the content of each json (because in the file there are 6 types of jsons, the ones that have all the keys with a value, the ones that don't, etc.). After that I need different pcollection for each type of json. I'm thinking about using beam.flatmap() to achieve this last step but first I need to have something like this:jsons = pipeline | "splitElements" >> ReadFromText(file)Thank you in advance, Keep in mind that I am new to this.
|
How do I split a file of json elements in Apache Beam
|
The touch program does not read its standard input or normally write to its standard output. It just updates the access time of the file, creating it (as a zero length file) if necessary. If you provide standard input to it, it'll just ignore it. For the purposes of learning about working with pipelines, you might instead prefer to use the tee program.
Although you haven't got there yet, with a bidirectional pipe (such as those made when you open them with the w+ mode) you need to be careful because of the potential for deadlock when both sides fill up their OS-level write buffers in the pipe and aren't reading from them. It's usually wisest to switch bidirectional pipes to non-blocking in Tcl. It is probably also a good idea to set them as no stricter than line buffered, or maybe even unbuffered. Or to use flush at the right times, but that's harder to get right.
fconfigure $f1 -blocking false -buffering none
These are fundamental issues caused by needing to have several programs work together, not by pipes themselves or the programming languages used; it's just that bidirectional pipelines are the place where programmers usually first observe them. They very much also apply to working with TCP sockets.
|
I was trying to understand the usage of command pipeline with the open command in TCL.
I read the following paragraph from the documentation: "If write-only access is used (e.g. access is w), then standard output for the pipeline is directed to the current standard output unless overridden by the command. If read-only access is used (e.g. access is r), standard input for the pipeline is taken from the current standard input unless overridden by the command."
I was unable to understand what it means, so I tried some code, which was:
set f1 [open "| touch testFile.txt" w+]
set a {USA UK AUS IND JAP}
foreach country $a {
puts $f1 "Member of democratic alliance : $country"
}
close $f1But when i checked the contents of file, there was nothing present.
Can somebody please explain that paragraph from the TCL documentation (with some exaples) and also point where am I doing mistake in my ownThanks
|
Use of command pipeline with open command
|
I believe this is what you're looking for: query the user's MemberOf attribute and, for each group, query the group's Info and Description attributes (I've also added Name so you have that reference, which I believe is important to have):
(Get-ADUser "username" -Properties MemberOf).MemberOf |
Get-ADGroup -Properties Name, Info, Description |
Select-Object Name, Info, Description
|
I would like to find the AD groups that a user is in and find what those different groups' notes are. Right now, I'm trying this:
Get-ADPrincipalGroupMembership "username" | Get-ADUser -Properties info,description
which is giving errors. I know there must be an easy way to do this that I'm missing.
|
Get AD Group Member Groups and Find Those Group Notes in One Query
|
You would be better off using the Spark + Spark Cassandra Connector combination to do this task. With Spark you can do joins in memory and write the data back to Cassandra or to any text file.
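For what it's worth, a minimal PySpark sketch of that idea could look like this; the keyspace, table names, join key and output path are placeholders, and it assumes the spark-cassandra-connector package is available on the Spark classpath:

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("cassandra-report-export")
         .config("spark.cassandra.connection.host", "cassandra-host")  # placeholder
         .getOrCreate())

# read two Cassandra tables as DataFrames
orders = (spark.read.format("org.apache.spark.sql.cassandra")
          .options(keyspace="shop", table="orders").load())
customers = (spark.read.format("org.apache.spark.sql.cassandra")
             .options(keyspace="shop", table="customers").load())

# join in memory, which Cassandra itself cannot do
report = orders.join(customers, "customer_id")

# write the result wherever the report is consumed (CSV here; could be JDBC, Mongo, etc.)
report.write.mode("overwrite").csv("/tmp/report_out")
```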
|
I have an application which is using Cassandra as a database. I need to create some kind of reports from the Cassandra DB data, but the data is not modelled as per the report queries, so one report may have data scattered in multiple tables. As Cassandra doesn't allow joins like an RDBMS, this is not simple to do.
So I am thinking of a solution to get the required tables' data into some other DB (RDBMS or Mongo) in real time and then generate the report from there. Do we have any standard way to get the data from Cassandra to other DBs (Mongo or RDBMS) in realtime, i.e. whenever an insert/update/delete happens in Cassandra the same has to be updated in the destination DB? Any example program or code would be very helpful.
|
Getting data from Cassandra tables to MongoDB/RDBMS in realtime
|
As the examples show, you need to use the HTTP POST method:
curl -X POST $WEBHOOK_URL
If you are pasting the URL directly into your browser it will use HTTP GET and result in a 404 error.
|
I've created a pipeline trigger in GitLab as the documentation said, but when I open it I get an "error": "404 Not Found".
Webhook URL: https://gitlab.com/api/v4/projects/xxxx/ref/xxxx/trigger/pipeline?token=xxxx
xxxx is being replaced by the values I have. I tried different things: setting the project to public, enabling/disabling Limit CI_JOB_TOKEN access. I'm a bit lost right now.
|
Gitlab pipeline trigger gives 404 when pasting webhook URL in the browser
|
Azure DevOps doesn't provide triggers for this level of control. You should control this not with your build server, but with your source control. You should prevent certain stages/jobs/tasks from running by using branch filters. I always build PRs and make them part of the approval process. Waiting for approvals is antithetical to DevOps practices.
Always build on PRs, but only save images when you have merged the PR.
|
I have a pipeline that builds a Docker image & pushes it to ACR. My requirement is to prevent the pipeline build policy from triggering until two people approve the pull request. Currently, when the pull request is created the build automatically starts running, without waiting for the pull request to be approved.
|
Prevent Azure Devops CI Pipeline From Triggering until Two People Approve the Pull Request
|
It seems like you're looking for the reverse.dep.* pattern, which is best described in the official documentation. Quoting the docs:
It is possible to redefine build parameters in the snapshot-dependency builds when the current build starts. For example, build configuration A depends on B and B depends on C; on triggering, A can change any parameter used in B or C.
It looks like this is your case. To change a parameter in all dependencies at once, use a wildcard:
reverse.dep.*.<property_name>
Anyway, I would encourage you to read the whole article to get a thorough understanding of the subject and choose the most suitable option.
|
I just started with the TeamCity CI server. I have 2 builds, API-Tests and UI-Tests. Both these builds run in parallel, and both builds have a dropdown config parameter with choices (Regression, Sanity). I have a build named Release with a similar dropdown config parameter with choices (Regression, Sanity), and this build depends on both API-Tests and UI-Tests. The Release build has to be triggered manually by choosing the dropdown parameter (Regression, Sanity).
I want to pass the option chosen in the Release build to both the API-Tests and UI-Tests builds. I can't use %dep.*%, since the Release build depends on the API-Tests and UI-Tests builds. I have attached the build chain for reference. Please guide me to fix the requirement or suggest at least a workaround.
Sample Build Chain
|
Passing/Initializing Parameter from Last Build in TeamCity Build Chain
|
This should work:
with self.output().open('w') as csv:
    csv.write(df.to_csv())
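Put back into the task from the question, the whole thing might look like this sketch; it assumes the question's download_symbol helper returns a pandas DataFrame, and it also uses self.symbol in output(), which the original snippet was missing:

```python
import luigi

class DownloadSymbol(luigi.Task):
    symbol = luigi.Parameter()

    def output(self):
        # note: self.symbol, not a bare `symbol`
        return luigi.LocalTarget(f'data/{self.symbol.lower()}_data.csv')

    def run(self):
        df = download_symbol(self.symbol)  # helper from the question
        # LocalTarget.open('w') yields a text handle, so write the CSV text
        # instead of handing the file object to df.to_csv()
        with self.output().open('w') as f:
            f.write(df.to_csv())
```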
|
I have a basic Luigi pipeline that I'm writing. The pipeline will download Apple stock data and create a CSV out of it. The following is what I've written:
# Download Apple data
class DownloadSymbol(luigi.Task):
    symbol = luigi.Parameter()

    def output(self):
        return luigi.LocalTarget(f'data/{symbol.lower()}_data.csv')

    def run(self):
        df = download_symbol(self.symbol)
        with self.output().open('w') as csv:
            df.to_csv(csv)
I wrote the context manager based on this article. But when I run this pipeline, I receive TypeError: write() argument must be str, not bytes. I've tried changing the context manager to a wb-type write, but I receive the same error.
How do I correctly utilize pd.DataFrame.to_csv() in a Luigi pipeline?
|
Writing a Luigi Target as a csv with Pandas
|
If you use scoring='recall', it calls sklearn.metrics.recall_score with the default average="binary". One way is to create a custom scorer:
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
from sklearn.metrics import recall_score
from sklearn.metrics import make_scorer
X,y = load_iris(return_X_y=True)
imba_pipeline = make_pipeline(SMOTE(random_state=42),
DecisionTreeClassifier())
multi_recall = make_scorer(recall_score, average="micro")
cross_val_score(imba_pipeline, X, y, scoring=multi_recall, cv=5)
array([0.96666667, 0.96666667, 0.9 , 0.93333333, 1. ])
|
imba_pipeline = make_pipeline(SMOTE(random_state=42),
DecisionTreeClassifier())
cross_val_score(imba_pipeline, X_train1, y_train1, scoring='recall', cv=kf)
When I run this, it gives the below error:
ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].
Out[102]:
array([nan, nan, nan, nan, nan])
|
How to revise this code for implementing Smote Oversampling and Cross Validation pipeline to multiclass classification problem?
|
In such cases, where you are able to run the command successfully when entered directly but it does not work when using variables, checking a few things usually helps:
- Echo the variable and check the values and their order.
- Check if the variable is protected. Protected variables are only accessible in protected branches.
|
I have been reading some questions and articles before asking this question:
- gitlab ci pipeline failed deploy ftp
- Use GitLab CI to deploy app with ftp
- LFTP in gitlab CI: files are not updated on FTP server even if they are changed in the last commit
- Use Gitlab Pipeline to push data to ftpserver
- https://savjee.be/2019/04/gitlab-ci-deploy-to-ftp-with-lftp/
My problem is that when I use the pipeline variables I can not log in.
$ lftp -e "set ssl:verify-certificate false; mirror --reverse --verbose=3 --delete ./ ./ --ignore-time --parallel=10 --exclude-glob .git* --exclude .git/; quit" -u $FTP_USER,$FTP_PASS $FTP_HOST
mirror: Login failed: 530 Login authentication failed
Cleaning up project directory and file based variables
But if I add the variable value in the yml it works. The cPanel FTP user is built in this way: user@domain. I don't know if this can be the problem when it's in a variable.
$ lftp -e "set ssl:verify-certificate false; mirror --reverse --verbose=3 --delete ./ ./ --ignore-time --parallel=10 --exclude-glob .git* --exclude .git/; quit" -u user@domian,password domain
|
Deploy a proyect with Gitlab cd/ci and lftp in a server with cpanels
|
Basically, don't. All (or at least most) sklearn classifiers will encode internally, and produce more useful information for you when they've been trained directly on the "real" target values. (E.g. predict will give the actual target values without you having to decode the mapping.)
(As for regression, if the target is actually ordinal in nature, you may be able to use TransformedTargetRegressor. Whether this makes sense probably depends on the model type.)
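A tiny sketch of the point about classifiers accepting string labels directly (illustrative only, with made-up data):

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression

X = pd.DataFrame({"colour": ["red", "blue", "red", "green"],
                  "size": [1.0, 2.5, 3.0, 0.5]})
y = pd.Series(["cat", "dog", "dog", "cat"])   # string target, left as-is

pre = ColumnTransformer([("ohe", OneHotEncoder(), ["colour"])],
                        remainder="passthrough")
clf = Pipeline([("pre", pre), ("model", LogisticRegression())])

clf.fit(X, y)              # no LabelEncoder needed for y
print(clf.predict(X[:2]))  # predictions come back as the original strings
```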
|
I am developing a classification base model. I have used the concepts of ColumnTransformer and Pipeline for feature engineering and selection, model selection, and for everything. I wanted to encode my categorical target (dependent) variable to numeric inside the pipeline. I came to know that we cannot use LabelEncoder inside either a ColumnTransformer or a Pipeline because its fit only takes (y) and it throws an error: 'TypeError: fit_transform() takes 2 positional arguments but 3 were given.' What are the other alternatives for the target variable? I found a lot of similar questions, but they were about features and the recommendations were to use OHE and OrdinalEncoder!
|
Alternatives of LabelEncoder() for target variable while implementing in a pipeline
|
You can set up Liquibase in a couple of different ways:
- You can use the Liquibase Docker image in your Azure pipeline. You can find more information about using the Liquibase Docker image here: https://docs.liquibase.com/workflows/liquibase-community/using-liquibase-and-docker.html
- You can install Liquibase on the Azure agent and ensure that all Liquibase jobs run on that specific agent where Liquibase is installed. Liquibase releases can be downloaded from: https://github.com/liquibase/liquibase/releases
The URL you point to shows that Liquibase commands are invoked from the C:\apps\Liquibase directory.
|
I am creating an Azure pipeline for the first time in my life (and a pipeline too) and there are some basic concepts that I don't understand. First of all, I have trouble understanding how the installation works: if my .yaml file installs Liquibase, will the Liquibase installation run every time the pipeline is triggered (by pushing on GitHub)? Also, I don't know how to run Liquibase commands from the agent. I see here that they use the Liquibase bat file; I guess you have to download the zip from the Liquibase website and put it on the agent, but how do you do that?
|
How to install Liquibase in a self hosted Windows agent ? (Azure Devops Pipeline)
|
You are missing 2 spaces on the changes: part. It should work when you add those (see the example below).
And in case you want to combine the 2 rules with the same conditions, you can combine them into one rule:
Example Tests:
  stage: Example
  tags:
    - aws-medium-runner
  script:
    - MY_SCRIPT
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push" || $SCHEDULE_A == "true"'
      changes: # Only run on pushes if changes have been made to certain directories
        - Test Cases/Example/*
        - Object Repository/Example/*
        - Test Suites/Example/*
        - Scripts/Example/*
      when: always
  dependencies:
    - Set Release Version
|
I have a GitLab-CI pipeline in place with my Katalon Studio automation tests that I would like to have the following functionality:
- Various nightly schedules that run based on scheduled variables being present.
- A changes declaration so pushes to the repo only trigger a pipeline run if certain files have been touched.
I have the scheduled portion running as expected, but I am struggling with pairing that with the 'changes' declaration to only run the pipeline IF someone pushes after changing certain files. Can someone help? I am guessing this is an issue with my YAML formatting. Here is an example snippet from my current GitLab-CI.yaml:
Example Tests:
stage: Example
tags:
- aws-medium-runner
script:
- MY_SCRIPT
rules:
- if: $SCHEDULE_A == "true" # tied to schedule A in scheduler tool
when: always
- if: '$CI_PIPELINE_SOURCE == "push"'
changes: # Only run on pushes if changes have been made to certain directories
- Test\ Cases/Example/*
- Object\ Repository/Example/*
- Test\ Suites/Example/*
- Scripts/Example/*
when: always
dependencies:
- Set Release Version
|
Stuggling to Create GitLab-CI Pipeline that includes 'schedules' and 'changes' declarations
|
Since the standard scaler and KNN imputer (mean from n nearest neighbors) are linear operations, running standardizer >> imputer >> inverse_standardizer produces the same results as the imputer alone. You can simplify your numeric pipeline as follows:
pipeline_num = Pipeline([
("imputer", imputer),
# Add other processing steps here
])Here's "proof" that the imputer operation alone produces the same results:df1 = ss.fit_transform(df_example[list_numeric_vars])
df1 = imputer.fit_transform(df1)
df1 = ss.inverse_transform(df1)
print(f'Scale/Impute/Inverse-Scale:\n{df1}\n')
df2 = imputer.fit_transform(df_example[list_numeric_vars])
print(f'Impute Only:\n{df2}\n')
Here's the output:
Scale/Impute/Inverse-Scale:
[[1. 4. ]
[2. 6. ]
[3. 6. ]
[3.33333333 5. ]
[6. 3. ]
[6. 8. ]
[9. 2. ]
[4. 8. ]
[5. 3. ]]
Impute Only:
[[1. 4. ]
[2. 6. ]
[3. 6. ]
[3.33333333 5. ]
[6. 3. ]
[6. 8. ]
[9. 2. ]
[4. 8. ]
[5. 3. ]]
|
I am trying to apply standardization followed by imputation using KNN. Then I want to back-transform the values, because I will apply some other transforms that require the original data. Is it possible to do this in a scikit-learn pipeline? No matter what I tried, I get an error.
Note: the inverse transform should be done within the pipeline, not when the pipeline has finished.
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler, OneHotEncoder, FunctionTransformer
from sklearn.impute import KNNImputer
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
ss = StandardScaler()
imputer = KNNImputer(n_neighbors=3, add_indicator=False)
ohe = OneHotEncoder()
df_example = pd.DataFrame(data={"num1":[1, 2, 3, np.nan, 6, 6, 9, 4, 5],
"num2":[4, np.nan, 6, 5, 3, 8, 2, 8, 3],
"cat1":['A', 'B', 'C', 'A', 'B', 'C', 'A', 'A', 'B']})
list_numeric_vars = ["num1", "num2"]
list_cat_vars = ["cat1"]
pipeline_num = Pipeline([
("standardizer", ss),
("imputer", imputer),
("standardizer_inverse", FunctionTransformer(ss.inverse_transform))
])
pipeline_cat = Pipeline([
("ohe", ohe),
])
ct = ColumnTransformer(
transformers =
[
("pipeline_num", pipeline_num, list_numeric_vars),
("pipeline_cat", pipeline_cat, list_cat_vars)
],
remainder ="drop"
)
ct.fit(df_example) # Error
|
Inverse scaler transform within sklearn pipeline
|
Try mentioning the format like this:
@concat('Test_',formatDateTime(convertTimeZone(utcnow(),'UTC','New Zealand Standard Time'),'yyyy-MM-ddTHHmmss'), '.csv')
|
In Azure Data Factory, I am preparing a pipeline for some data that I would like to run. I have a dynamic content script for the output of a Copy Data activity (attached are the visualised data factory and the output file).
QUESTION: When I add the timezone script, why does the output file in the blob storage show random numbers at the end of the file name (i.e. the "7539827")? What can I do to remove this?
Following are the scripts for the original dynamic content and the script with the date.
Original Dynamic Content script:
@concat(' _',pipeline().parameters.SliceStart,"_')
Script with Timezone:
@concat('Test_',convertTimeZone(utcnow(),'UTC','New Zealand Standard Time'),'_')
Images: visualised data factory, csv output in blob storage
|
Azure Data Factory - Add current time & random numbers
|
You could encode the .ENV into a base64 string and set it as a repository variable on the repo.
1. Convert .ENV into base64
The following command(s) will convert a file into base64, strip all newline characters and also copy it to your clipboard.
Linux:
openssl base64 < .ENV | tr -d '\n' | xclip -selection clipboard
Mac:
openssl base64 < .ENV | tr -d '\n' | pbcopy
Note: for the Linux command, you will need to install the xclip package. Unfortunately I don't develop on a Windows machine so I cannot provide the Windows version of the script.
2. Add the base64'd ENV to your repository variables
Repository settings -> Repository Variables
Make sure the secured box is ticked as well.
3. Decode the base64 value into a .ENV in your pipeline
This can be done with something like this:
script:
- echo $ENV_FILE_B64 | base64 -di > .ENV
- ./gradlew zipName
The base64 -d command will decode the contents of $ENV_FILE_B64 from base64 and dump it into a file called .ENV. The i flag ignores garbage characters when decoding.
|
I am working on a Bitbucket pipeline where a zip file gets created using environment variables that are set through a .env file. Before this zip file gets generated, usually a .env file is configured with the variables used to create the zip file. The zip file gets generated using the Gradle wrapper (gradlew). I'm a little unsure how to set these variables in the .env file. I know Bitbucket has the option to use repository variables in order to set these values, but I am unsure if this is best practice for automating this process. Any advice on this would be appreciated. Below is how the environment variables currently get set and how the zip is created with gradlew.
.ENV
NETWORK_NAME=network
GROVE_ZIP_PATH=./build
GROVE_ZIP_FILENAME=zipName.zip
# Configuration
HOST=host.docker.internal
PORT_MAIN=8070
PORT_SEARCH=8074
Create Zip:
.\gradlew zipName
The ENV file needs to be configured before the zip file can be created. Any advice on how to do this in a pipeline would help.
|
Bitbucket pipeline for build Gradle
|
I am unable to fully test your code because of missing data. However, you may be able to adopt FunctionTransformer as follows:
Code:
import numpy as np
from sklearn.preprocessing import FunctionTransformer

def CustomMultiplier(arrs):
    a = arrs[:,0]
    b = np.prod(arrs[:,1:], axis=1)
    return np.column_stack((a, b))

if __name__ == '__main__':
    transformer = FunctionTransformer(CustomMultiplier)
    X = np.array([[1,3,4], [2,4,5]])
    result = transformer.transform(X)
    print(result)
Result:
[[ 1 12]
 [ 2 20]]
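If it helps, wiring this into the pipeline shape the question asks for might look like the sketch below, so raw A, B, C rows can be passed straight to predict:

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, FunctionTransformer
from sklearn.linear_model import LinearRegression

pipe = Pipeline([
    ('product', FunctionTransformer(CustomMultiplier)),  # A kept, B and C replaced by B*C
    ('minmax', StandardScaler()),
    ('linear', LinearRegression())
])
# pipe.fit(x_train, y_train) then accepts the original three columns, and
# pipe.predict(np.array([[1, 3, 4]])) works without pre-multiplying B and C.
```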
|
I wanted to know how I can insert into a sklearn pipeline one step which multiplies two columns' values and deletes the original ones. I'm doing something like this:
1. After loading the DataFrame, I multiply the target columns and delete them.
2. Prepare X, Y, training set and test set.
3. Configure the pipeline with StandardScaler and some ML method (for example Linear Regression).
4. Fit and predict.
import pandas as pd, numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
# df is a pandas dataframe with columns A, B, C, Y
df['BC']=df['B']*te['C']
df.drop(columns=['B','C'], inplace=True)
X = df.loc[:,['A','BC']]
Y = df['Y']
x_train, x_test, y_train, y_test = train_test_split(X,Y,train_size=0.8)
pipe = Pipeline([
('minmax',StandardScaler()),
('linear',LinearRegression())
])
pipe.fit(x_train,y_train)
y_pred = pipe.predict(x_test)
With this approach, when I want to make a prediction on new data, I must pass the multiplication already done, for example for A=1, B=3, C=4:
print(pipe.predict(np.array([[1,12]])))
And I want an approach like:
print(pipe.predict(np.array([[1,3,4]])))
What I want is to modify the pipeline to something like:
pipe = Pipeline([
('product', CustomFunction(columns_to_multiply, result_name_column)),
('minmax',StandardScaler()),
('linear',LinearRegression())
])
Is it possible with scikit-learn or custom functions? How?
|
Create sklearn pipeline with column operations step
|
I would just run a for loop in such situations:
probab = []
a = [{0: 0.47260814905166626, 1: 0.5273918509483337}]
for x in a:
    probab.append(x.get(1))
The probability is stored in probab:
print(probab)
[0.5273918509483337]
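If the pipeline's predict_proba actually returns the usual NumPy array of shape (n_samples, 2) rather than a list of dicts, the same thing can be done by slicing the second column; a sketch, where texts stands for whatever raw input you feed the pipeline:

```python
# probability of class 1 for every row, straight from the pipeline
proba_of_one = gbm_pipeline.predict_proba(texts)[:, 1]
```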
|
I have a pipeline which takes in a TFIDF vectorizer and a GBM binary classifier and gives me the label and probability. In production, I don't want the label, I just want the probability of 1 coming out of the pipeline. Can I make changes to the pipeline to get just the probability of 1 instead of the label and the probabilities of 0 and 1?
gbm_pipeline = Pipeline([('tfidf', TfidfVectorizer(use_idf=True)),
('gbm',GradientBoostingClassifier(random_state = 23)),
])
When I use this pipeline to predict, it gives me something like:
predict [1]
predict_proba [{0: 0.47260814905166626, 1: 0.5273918509483337}]
whereas I just want it to be 0.5273918509483337.
PS: I cannot make use of the Pipeline's output. I wish to make the changes in the pipeline itself so that instead of getting the label and probability, I just get the probability of 1.
|
Get the probabilities out of a sklearn pipeline using GBM
|
After a lot of searching I discovered a very strange solution. I had to remove the space before the list = [...] in the layers definition, i.e. write layers=[200, 30, 10] instead of layers= [200, 30, 10]:
mlpc = MultilayerPerceptronClassifier(layers=[200, 30, 10],\
seed=1234,\
featuresCol="features",\
labelCols="label")
|
I am having a problem running a prediction using a saved MultilayerPerceptronClassifier model.
# reading the saved model
# spark version: version 3.1.2, python3.6
from pyspark.ml import PipelineModel
from pyspark.ml import Pipeline
saved_model = "/home/user/Desktop/algorithms/mlpc_model_8979"
read_model = PipelineModel.load(saved_model)
# predictions using the read model
pred = read_model.transform(df)
It throws this error:
Py4JJavaError: An error occurred while calling o98.transform.
: java.util.NoSuchElementException: Failed to find a default value for layers
The original mlpc in the pipeline had layers defined:
mlpc = MultilayerPerceptronClassifier(layers= [200, 30, 10],\
seed=1234,\
featuresCol="features",\
labelCols="label")My attempts to solve it: If I run the pipeline model and do predictions without first saving the model. I works with no error. But saving and re-using the model throws this error.
Any help on how to solve this "Failed to find a default value for layers" error?
|
NoSuchElementException: Failed to find a default value for layers in MultiLayerPerceptronClassifier
|
This should be part of GitLab API pagination:
per_page: number of items to list per page (default: 20, max: 100)
If the GitLab pipeline page uses its own API, it would display up to 100 rows per page.
|
It was experimentally found that not all jobs are displayed on Gitlab pipeline's page. But I can't find any mention of this in the documentation. Only 100 "rows" are shown.
|
Is there any limit on the number of displayed jobs on pipeline's details page? (gitlab)
|
Try this:
db.show.aggregate([
{
"$lookup": {
"from": "episode",
"let": {
"episodeId": "$_id"
},
"pipeline": [
{
"$match": {
"$expr": {
"$eq": [
"$show_id",
"$$episodeId"
]
}
}
},
{
"$sort": {
"pubDate": -1
}
},
{
$limit: 1
}
],
"as": "season"
}
}
])
You would get the output as:
[
{
"_id": 1,
"season": [
{
"_id": 2,
"pubDate": ISODate("2021-08-04T00:00:00Z"),
"show_id": 1
}
],
"show": 1
}
]
|
I am trying to limit the returned array to just 1; currently when I run this it shows me all the results under the added season. How do I limit the season to return 1?
{
'from': 'episode',
'let': {
'episodeId': '$_id'
},
'pipeline': [{
"$match": {
"$expr": {
"$eq": ["$show_id", "$$episodeId"]
}
}
}, {
"$sort": {
"pubDate": -1
}
}],
'as': 'season'
}
Basically all I need from the sub-array is the first season number; in the picture below you can see it is season 3. When I add the limit to the pipeline I get "Stage must be a properly formatted document."
|
mongodb pipeline limit to 1
|
This is totally possible; AWS even provides a lot of resources and guides for this topic. For example, this walkthrough helps you build a pipeline with a production and a staging stack.
|
I'm using CodePipeline and CloudFormation and wanted to know if we can deploy the code at once to multiple environments like lab, dev & prod using CodePipeline, with an option to select which environments the code should be deployed to. Is this possible or not, and how can I achieve this? Thank you.
|
AWS Codepipeline - Deploy on multiple environments at once
|
It would appear that uploading a file over the trigger API is not possible. A new feature request has been made to GitLab. Feel free to push it if you agree that this feature is needed!
|
I want to push a file to a GitLab pipeline from an external process using curl or a similar tool. Uploading the file can be accomplished with a GitLab Trigger API request:
curl -X POST \
-F "token=$(cat .gitlab-trigger)" \
-F "ref=develop" \
-F "variables[env]=qua" \
-F "[email protected]" \
https://gitlab.company.com/api/v4/projects/1234/trigger/pipeline
The pipeline job can then access a TRIGGER_PAYLOAD file similar to:
{
"ref": "develop",
"variables": {
"env": "qua"
},
"bundle": {
"filename": "bundle.zip",
"type": "application/octet-stream",
"name": "bundle",
"tempfile": "#\u003cFile:0x00007fcc8b7581e0\u003e",
"head": "Content-Disposition: form-data; name=\"bundle\"; filename=\"bundle.zip\"\r\nContent-Type: application/octet-stream\r\n"
},
"id": "1228"
}
Judging from the file content it would appear that the bundle.zip file is uploaded to the GitLab server. How can I get hold of the bundle.zip file? Is it even possible?
Please note that:
- Neither the bundle nor the temp file is found in the current dir or in the temporary parent dir of the TRIGGER_PAYLOAD file.
- Specifying the payload file as a variables[bundle] form param makes GitLab reject the request, as only strings and map variables are supported.
- Submitting the token and variables[env] variables as query params and adding the ZIP file as the binary-only payload (no form params) makes the upload fail.
|
How can I download a binary file submitted to gitlab over a trigger?
|
I solved this issue by adding the step "- checkout: repository" in the template. Now the "Required template" check added to the service connection is working fine, stopping the main pipeline from invoking anything other than the allowed templates.
|
I am extending a template through a repository resource in a yml pipeline (mainpipeline.yml), and all is working fine. The projects' and repos' folder structure details are below.
- My template is in OrgA -> proj1 -> repoX -> branch -> templates/set1/template1.yml (only one stage in this template, with one job and 3 tasks)
- My main pipeline is in OrgA -> proj2 -> repoY -> branch -> pipelines/mainpipeline.yml
- I created service connection sc1 in OrgA -> proj2 to invoke templates from OrgA -> proj1 -> repoX
But the issue is, I added a required template check to enforce that "mainpipeline.yml" extends "template1.yml" with the help of the steps provided by the Microsoft documentation. The check is not working; it's not restricting the main pipeline from invoking templates other than the one added in the check, and there is no pass or fail info.
|
Azure DevOps "Required template check" not working
|
CI_DOKKA_KMP: "[ci skip]Generated doc for KMP"
workflow:
rules:
- if: $CI_COMMIT_MESSAGE =~ /^\[ci skip\]/
when: never
- when: always
It's solved :)
|
I have a problem with setting rules for the pipeline. I need to control when the pipeline runs and when it does not run based on $CI_COMMIT_MESSAGE, but somewhere there is a problem. I need these rules:
- if the commit message is "Generated doc for KMP" then stop the pipeline
- if the commit message is not "Generated doc for KMP" then run the pipeline
variables:
CI_DOKKA_KMP: "Generated doc for KMP"
workflow:
rules:
- if: '$CI_COMMIT_MESSAGE == "$CI_DOKKA_KMP"'
when: never
- when: always
|
Problem with setting worflows rules for pipeline
|
There are two ways to perform this operation.
1. Initialize the Transformer/Estimator with the hyperparameter:
pipe = Pipeline(
steps=[('pca', PCA(n_components=2)),
('clf', DecisionTreeClassifier())])
2. Fix the hyperparameter in the param_grid:
param_grid = {
'pca__n_components': [2]
}
grid = GridSearchCV(pipe, param_grid)
Notes:
The dimension of the PCA is an important parameter to fine-tune your model. An optimal n_components will certainly boost the performance of your model while reducing its complexity.
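A quick end-to-end sketch of the first option, on the iris data purely for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

pipe = Pipeline(steps=[('pca', PCA(n_components=2)),       # fixed, never searched
                       ('clf', DecisionTreeClassifier())])

param_grid = {'clf__max_depth': [2, 3, 5]}                  # only the tree is tuned
grid = GridSearchCV(pipe, param_grid, cv=5)
grid.fit(X, y)
print(grid.best_params_)
```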
|
I am trying to first apply PCA to the original data, and then use a decision tree for classification. For PCA, I just want to fix n_components, and for the decision tree, I am using GridSearchCV to find the best hyperparameter settings.
How do I make sure that n_components does not change? Can I fix it when I am defining PCA in the pipeline, and not mention any setting for PCA in the param_grid of GridSearchCV? Or shall I fix it in the param_grid of GridSearchCV like 'PCA__n_components': [5]?
|
Can I use fixed parameter settings for one component when using GridSearchCV with pipeline?
|
You can define variables and limit their scope.
By default, all CI/CD variables are available to any job in a pipeline. Therefore, if a project uses a compromised tool in a test job, it could expose all CI/CD variables that a deployment job used. This is a common scenario in supply chain attacks. GitLab helps mitigate supply chain attacks by limiting the environment scope of a variable. GitLab does this by defining which environments and corresponding jobs the variable can be available for.
See "Scoping environments with specs" and "CI/CD variable expression":
deploy:
script: cap staging deploy
environment: staging
only:
variables:
- $RELEASE == "staging"
- $STAGING
|
I'm trying to migrate a Bitbucket pipeline to GitLab. In Bitbucket we use a different set of environment variables for each deployment (staging/production etc.). I don't know how to specify this in GitLab. I've set up just group variables and variables specific to the repository, but I've not found how to override e.g. the DB name for different deployments. Thank you in advance for your help.
|
Use different environment variables per deployment in GitLab
|
You can add a rule to your build job so that it runs on merge requests and on commits to the branch named develop.
build:
stage: build
script:
- build
rules:
- if: '$CI_COMMIT_BRANCH == "develop" || $CI_PIPELINE_SOURCE == "merge_request_event"'
|
I have set up a pipeline .gitlab-ci.yml where build and deploy to dev work fine.
The rule is to run a pipeline for QA after the merge request from develop to QA. When I create a merge request it runs a pipeline only for this stage; it is not running the build stage again. Can you please help me make the build run for every stage?
stages:
- build
- deploy dev
- deploy release
build:
stage: build
script:
- build
deploy dev:
stage: deploy dev
environment: DEV
only:
- develop
deploy release:
stage: deploy release
dependencies:
- build
environment: QA
rules:
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
- if: $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME == "develop"
|
Build not running in CICD after merge request
|
It is possible to create schedules in GitLab, and you can specify whether jobs run in schedules or not (see the section "Using only and except" in the link above). So you can mix scheduled tasks and manually triggered tasks in one pipeline configuration (which effectively gives you two or more pipelines).
|
I'm looking for a way to split gitlab-ci.yml to make it possible to run some scripts within one GitLab repository. I have some .py and .ps scripts in my git repository. I'd like to set up a scheduler to execute some of them at a specific time, but sometimes I need to execute some other scripts manually.
As far as I understand, it is possible to have only 1 pipeline within 1 repository. So, it seems, I can't cover all necessary scenarios within a single pipeline to run different scripts depending on needs. Is there any possible solution to make this possible?
I'm a novice in GitLab CI, so all advice will be useful. Thank you in advance!
|
How to split gitlab-ci.yml to make it possible to run some scripts within one GitLab repository
|
The pre-defined variable Build.Repository.Name is the name of the triggering repository, not the name of the repo that triggered the build. See Build variables (DevOps Services) for details.
In addition, referring to the doc Check out multiple repositories in your pipeline, the checkout step is validated when queuing the build, so I am afraid that you cannot change its referenced repository name using a runtime variable, which can only set different values for scripts and tasks at runtime.
Therefore, as a workaround, you could use the pre-defined variable Build.TriggeredBy.ProjectID to get the ID of the project that contains the triggering build, and Build.TriggeredBy.DefinitionId to get the DefinitionID of the triggering build. After that, use the REST API Definitions - Get to get the repository that triggered the build from repository.url in the response. Its format will be like https://dev.azure.com/{organization}/{project}/_git/{repository name}.
Finally, you could add a Command Line task to check out this repository using the command git clone https://username:[email protected]/{organization}/{project}/_git/{repository name}; you could generate a PAT with full access.
|
I am using Azure git, and my pipeline is using multi-repo triggers. I want to check out the repo that triggered the build. I see that checkout: self uses the repository where the .yml file lives, not the one that triggered the build. I found that $(Build.Repository.Name) holds the name of the repo that triggered the build. So in my .yml file, I tried to pass it to the checkout step but got an error:
resources:
repositories:
- repository: A
type: git
name: Dev/A
trigger:
- '*'
- repository: B
type: git
name: Dev/B
trigger:
- '*'
- repository: C
type: git
name: Dev/C
trigger:
- '*'
pool: Default
steps:
- checkout: git://Dev/$(Build.Repository.Name)@refs/heads/master
- checkout: git://Dev/B@refs/heads/master
The error message is:
The pipeline is not valid. The repository $(Build.Repository.Name) in project Dev could not be retrieved. Verify the name and credentials being used.
How can I pass the variable to the checkout step?
|
In Azure can I dynamically checkout a repo when using multi-repo triggers?
|
Please validate that you have enabled custom cache mode:
In the build project configuration, under Artifacts, expand Additional Configuration. For Cache type, choose Local.
|
I'm trying to use AWS CodeBuild - Local Custom Cache. I'm failing to perform the simplest task of caching a file between builds. What's explained in "AWS CodeBuild local cache failing to actually cache?" doesn't work for me. This is my buildspec.yml:
version: 0.2
phases:
pre_build:
commands:
- CACHE_DIR='docker-img-cache'
- CACHED_FILE='docker-img-cache/test.md'
build:
commands:
- ls $CACHE_DIR
- ls -sh $CACHED_FILE || true
# If file exist, print its content and size
- if [ -f $CACHED_FILE ]; then echo "File is in cache"; ls -sh $CACHED_FILE; fi
# Else create file, print its size
- if ! [ -f $CACHED_FILE ]; then echo "Hello cache world" > $CACHED_FILE; echo "File created"; ls -sh $CACHED_FILE; fi
- ls $CACHE_DIR
cache:
paths:
- 'docker-img-cache'
Only the directory gets cached, but without the file inside. I've already tried the '/*' and '/**/*' suffixes. If you try it on your own, you will be able to see that the file gets created on every build, even though the directory exists.
|
Why AWS Codebuild Local - Custom Cache is not caching file?
|
The filter is working correctly, but you need a final ForEach-Object for processing results in the pipeline.
Get-Content .\adresses.txt `
| Test-NetConnection `
| Where-Object { $_.PingSucceeded } `
| ForEach-Object { Write-Host "SUCCEDED: $($_.ComputerName)" }Note: if you also want to silent warnings fromTest-NetConnectionplease checkthis question.
|
I'm trying to create simple "ping" job using PowerShell. And I want to do it in "pipeline" way. Though it looks like Where-Object receives strings, not objects of TestNetConnectionResult class. Could you please explain how to filter out the results of Test-NetConnection where ping was successful?Get-Content .\adresses.txt | Test-NetConnection | Where-Object { $_.PingSucceeded } | Write-Output
|
Apply filter to Test-NetConnection result pipeline
|
The main method is sklearn.utils.estimator_html_repr; see its API docs, the User Guide, and the code in this file. That function calls str(estimator), so that is where most estimators generate their outputs for the html (as you see in your example, with the DummyPipeline printing "__str__"). Meta-estimators (like the pipeline) get examined in this section of the code, and Pipeline itself gets some special treatment in its _sk_visual_block_ method here.
So, depending on what exactly you want to change, there are many places you might need to do so. I have monkey-patched the _STYLE constant from the estimator_html_repr.py file before; since there are not too many composite estimators, this can work well enough.
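As a starting point for experimenting, the raw HTML can also be produced directly and post-processed however you like before displaying or saving it (a sketch; the post-processing itself is up to you):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.utils import estimator_html_repr

pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])

html = estimator_html_repr(pipe)   # the same HTML the notebook diagram is built from
# tweak the markup/CSS in `html` here, then display or save it
with open("pipeline_diagram.html", "w") as f:
    f.write(html)
```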
|
Sklearn has a nice and rather unknown visualization that can be activated via sklearn.set_config(display='diagram'). I am trying to customize the output of the visualization and cannot figure out how the html output is generated. I know Python's magic methods __str__ and __repr__ can be used to create a textual representation of an object. I expected that __repr__ would be used to create the html output. To test this assumption, I overwrote the method to output the string "repr". As the following code and its output show, the __repr__ method is called but obviously it is not used as the entrypoint for the html generation, since that would result in a single output: "repr".
import sklearn
from sklearn.base import BaseEstimator
from sklearn.pipeline import Pipeline
sklearn.set_config(display='diagram')
class DummyPipeline(Pipeline):
    def __repr__(self, *args):
        print("repr")
        return "__repr__"

    def __str__(self, *args):
        print("str")
        return ("__str__")

class DummyEstimator(BaseEstimator):
    def fit(self, X, y=None):
        pass

    def transform(self, X, y=None):
        pass

DummyPipeline(steps=[('first_estimator', DummyEstimator()), ('second_estimator', DummyEstimator())])
This returns:
The question is therefore: which method would I need to override to change the html representation?
|
How to modify sklearn's pipeline visualization (what it used instead of __repr__ and __str__?)
|
The Copy activity in Azure Data Factory v2 does have the ability to add extra columns, such as variables and filepaths, at run time. Use variables like $$FILEPATH. See the official docs for more info: https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-overview#add-additional-columns-during-copy
|
I’m trying to create a pipeline on Azure data factory.
The pipeline picks up a file from an Azure file store and copies the data to a SQL table. It works fine using a Copy Data task, but I want to include the name of the file in a column in the SQL table.
Is this possible?
|
Azure Data Factory ADF data pipeline to include filename in copy data to sql databse
|
I think you can execute Pipeline A manually within Pipeline B:
- trigger Pipeline_B_A to run monthly.
- trigger Pipeline_A to run every 2 hours.
|
I have an import pipeline (Pipeline A) that runs every 2 hours; it imports the data from local SQL DBs up into my Azure DB. Part of this data is used for integration with a 3rd party system (that is updated every 2 hours); the other parts of this data are used for reporting.
I have a Monthly Report Pipeline (Pipeline B) that does the following: executes stored procedures, outputs the results to a Blob, then triggers a Logic App to upload the contents of the Blob to the destination.
However, if Pipeline B is triggered whilst Pipeline A is running, then there will be errors. I would like to create a monthly trigger for Pipeline B, but make it so that it is only ever run after Pipeline A first completes on the first day of the month. Ideas?
|
Trigger Pipeline B on Monthly schedule and when Pipeline A is completed
|
Instead of context.run(example_gen) try running:
from invoke import task
@task
def build(c):
    c.run(example_gen)
and make a new cell:
!invoke build
|
I'm reading the textbook "Building Machine Learning Pipelines: Automating Model Life Cycles with TensorFlow" and one example shows you how to read a CSV file and convert it to the tf.Example data structure. However I'm really confused as to what they're doing with the directories on the line:
data_dir = os.path.join(os.pardir, "data")
examples = external_input(os.path.join(base_dir, data_dir))
Even the official TensorFlow ExampleGen page shows:
examples = csv_input(os.path.join(base_dir, 'data/simple'))
example_gen = CsvExampleGen(input=examples)
I just don't understand what this 'data' directory refers to. I have a Jupyter notebook and my csv file called "student_scores" in the same directory - so how would I use CsvExampleGen to ingest the data and convert it to tf.Example? I've attached both the example in the textbook and my error message as well. If anyone can help me out that would be a huge help. Mainly I'm trying to figure out how to read a local csv file using CsvExampleGen. Thanks!
|
Error in Data Ingestion part (CSV File) using CsvExampleGen in TensorFlow
|
See Builds by source changes:
Alternatively, instead of polling on a fixed interval, you can use a URL trigger (described above), but with /polling instead of /build at the end of the URL. This makes Jenkins poll the SCM for changes rather than building immediately. This prevents Jenkins from running a build with no relevant changes for commits affecting modules or branches that are unrelated to the job. When using /polling the job must be configured for polling, but the schedule can be empty.
Pipeline → Definition: Pipeline Script from SCM → SCM → Additional Behaviours → Add → Polling ignores commits with certain messages:
If set, and Jenkins is set to poll for changes, Jenkins will ignore any revisions committed with a message matched to Pattern when determining if a build needs to be triggered. This can be used to exclude commits done by the build itself from triggering another build, assuming the build server commits the change with a distinct message.
[Emphasis by me.]
Disclaimer: not tested for real, just RTFM.
|
I have a Maven project and I am trying to create a CI pipeline using Jenkins for releasing the project on commit/merge request. I am able to successfully release the new version but I am stuck in a looping issue.
Steps:
1. Create a Jenkinsfile in the project.
2. Create a pipeline project in Jenkins.
3. Enable the webhook in GitLab -> integration.
4. The developer pushes the code from the local machine to the GitLab repo with version 1.0.0-SNAPSHOT.
5. The pipeline is triggered automatically, since the webhook is enabled.
6. The Maven build and tests run successfully.
7. Maven release prepare and perform (with "ci skip" as the commit prefix) commits to the GitLab repo with version 1.0.1-SNAPSHOT (the next version).
8. The pipeline is triggered again, since a new commit has been pushed.
As of now in Jenkins, I am checking whether the commit message contains "ci skip" and skipping the staging. Because of this, for every single commit the pipeline is triggered twice. In Azure Pipelines we are able to stop the looping by using ***NO_CI***. Could you please suggest the best way to handle this in the Jenkins pipeline or in the GitLab webhook?
|
Jenkins + GitLab CI Pipeline Maven project
|
So the answer to my question is that I first must create a class, like so:

class DataframeFunctionTransformer():
    def __init__(self, func):
        self.func = func

    def transform(self, input_df, **transform_params):
        return self.func(input_df)

    def fit(self, X, y=None, **fit_params):
        return self

Then once this class is created, I can create my own function, which was to add a new column (the isChild column) to the Titanic DataFrame:

def ischild(dataset):
    dataset['Child'] = dataset['Age'].apply(lambda x: 'Yes' if x < 13 else 'No')
    return dataset

Now when creating a pipeline using sklearn I can use my new function like so:

from sklearn.pipeline import Pipeline

pipeline = Pipeline([
    ("ChildColumn", DataframeFunctionTransformer(ischild))
])

Thank you.
|
I apologize but I'm not sure how to explain this without just giving my example, therefore:For the titanic dataset in Kaggle, some people have added a new column called 'isChild' and applied it to the age column to where if the age is under 13 he is a child or else he/she is an adult. From there they are fine to preprocess and create and tune their model.If I were to create that same model and deploy it to where anyone can fill a form on the frontend with theoriginalinputs of the Dataframe, the model wouldn't work because the 'isChild' is computed during the preprocessing part.I understand people use Pipeline and make_pipline to create a process but my question here is that people always add the generic steps in the Pipeline like PCA or imputing missing values. How do I add a step that adds this new column and then runs it through the model?if you can guide me or link me with something helpful or answer this question it would be much appreciated.
|
Creating new Pandas Dataframe column within pipeline process
|
From seeing the source code of the Pipeline implementation, the estimator used to fit the data goes in the last position of your steps; the _final_estimator property of Pipeline reads the last position of the Pipeline's steps.

@property
def _final_estimator(self):
    estimator = self.steps[-1][1]
    return 'passthrough' if estimator is None else estimator

where steps might be something like

steps = [('scaler', StandardScaler(copy=True, with_mean=True, with_std=True)),
         ('svc',
          SVC(C=1.0, break_ties=False, cache_size=200, class_weight=None, coef0=0.0,
              decision_function_shape='ovr', degree=3, gamma='scale', kernel='rbf',
              max_iter=-1, probability=False, random_state=None, shrinking=True,
              tol=0.001, verbose=False))]

The _final_estimator property is only called, after fitting all the transforms one after the other, to get the estimator to be fitted to the model (see line 333 of the source for details). So, considering steps, I can retrieve the SVC instance from its last position

final_estimator = steps[-1][1]
final_estimator
>>> SVC(C=1.0, ..., verbose=False)

and fit it to the training data

final_estimator.fit(Xt, y)

where Xt is the transformed training data (calculated before fitting the estimator) and y the training target.
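To make that flow concrete, here is a minimal sketch of what Pipeline.fit effectively does: fit/apply every step except the last, then fit the final estimator on the transformed data. The iris dataset is used purely as an assumed example.

from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
steps = [('scaler', StandardScaler()), ('svc', SVC())]

# Apply every transformer except the final estimator.
Xt = X
for name, transformer in steps[:-1]:
    Xt = transformer.fit_transform(Xt, y)

# Fit the final estimator (what _final_estimator points to) on the transformed data.
final_estimator = steps[-1][1]
final_estimator.fit(Xt, y)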
|
I have a sklearn pipeline that consists of a custom transformer, followed by XGBClassifier. What I would like to add as a final step in the pipeline is another custom transformer that transforms the results of the XGBClassifier. This last custom transformer will rank the predicted probabilities into ranks (5-percentiles).

Pipeline([
    ('custom_trsf1', custom_trsf1),
    ('clf', XGBClassifier()),
    ('custom_trsf2', custom_trsf2)])

The problem is that the sklearn pipeline requires that all steps (but the last) have a fit and transform method. Can I solve this in another way instead of extending XGBClassifier and adding a transform method to it?
|
Transform results of estimator in a sklearn pipeline
|
Programmatic re-execution of part of the pipeline requires identifying the run ID of a parent run, which is available as:

parent_run_id = instance.get_runs()[0].run_id

Then re-execute the pipeline:

result = reexecute_pipeline(inputs_pipeline, parent_run_id=parent_run_id,
                            step_keys_to_execute=['step2.compute', 'step3.compute'],
                            run_config=run_config, instance=instance)
|
I created a test pipeline and it fails mid-way. I want to programmatically re-execute it, but starting at the failed step of the pipeline and moving forward. I do not want to repeat execution of the earlier, successful steps.

from dagster import DagsterInstance, execute_pipeline, pipeline, solid, reexecute_pipeline
from random import random
instance = DagsterInstance.ephemeral()
@solid
def step1(context, data):
return range(10), ('a' + i for i in range(10))
@solid
def step2(context, step1op):
x,y = step1op
# simulation of noise
xx = [el * (1 + 0.1 * random()) for el in x]
xx2 = [(el - 1)/el for el in xx]
return zip(xx, xx2), y
@solid
def step3(context, step2op):
x, y = step2op
...
return x, y
run_config = {...}
@pipeline
def inputs_pipeline():
step3(step2(step1()))
|
Dagster: how to reexecute failed steps of a pipeline?
|
yes it seems that once there is no more variable inside the parenthesis it works:
it manages to update only if there is string content in the parenthesis.and when it succeeds it prints the version number too:Configuration:
Version: 1.20.1120.05
File version: 1.20.1120.05
InformationalVersion: 1.20.1120.05
Custom attributes:
|
I have a single file where I specify my version:I try to update it with the 'Update AssemblyInfo' task on build pipeline:when executing the task do logs the update:Searching for files...
============================================================
D:\BuildAgent01\_work\21\s\AutomationGeneral\Properties\SharedAssemblyInfo.cs
Updating attributes
-------------------
Skipped 'AssemblyProduct' (no value defined)
Updating 'AssemblyVersion'...
Updating 'AssemblyFileVersion'...
Updating 'AssemblyInformationalVersion'...
Saving changes...But the content of the file is not updated: it stills remains:[assembly: AssemblyFileVersion(SolutionItem.Version)]Do I need to remove the const string and put the string content inside of the parameter of the method?[assembly: AssemblyFileVersion("1.1.1.1")]so the replacement will be really done?is not the Task messing here? or is it me?
|
Update AssemblyInfo const variable
|
Yes. The following example will echo ${TEST_VAR} depending on the branch (dev or main).

.test-job:
stage: test
script:
- echo "${TEST_VAR}"
test-dev-job:
variables:
TEST_VAR: "dev-value"
rules:
- if: $CI_COMMIT_BRANCH == "dev"
extends: .test-job
test-main-job:
variables:
TEST_VAR: "main-value"
rules:
- if: $CI_COMMIT_BRANCH == "main"
extends: .test-job
|
I have set up my GitLab pipeline and I'm using GitLab CI variables to generate my configuration file during the build phase. Now we've set up a couple new environments, with each having its own database and other credentials, so I need to generate my configuration file using each environment's variables based on branch. I've already seen:https://gitlab.com/gitlab-org/gitlab/-/issues/14223https://gitlab.com/gitlab-org/gitlab-foss/-/issues/13379https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/8858https://medium.com/spacepilots/sourcing-environment-variables-in-gitlab-ci-or-a-poor-mans-dotenv-dfc33ca231dfSome users suggested sourcing environment variables from files but that isn't really a solution because we want to limit access to these variables.
Is there a solution or workaround to this problem in.gitlab-ci.yaml?
|
Use variables per branch in gitlab
|
Maybe you can change your classifier and tokenizer to pass around a max_len parameter, and then grid search only over the tokenizer's max_len parameter.
Not the cleanest way, but it might do.

from sklearn.base import BaseEstimator, TransformerMixin
class TokeinizePadding(BaseEstimator, TransformerMixin):
def __init__(self, max_len, ...):
self.max_len = max_len
...
def fit(self, X, y=None):
...
return self
def transform(self, X, y=None):
data = ... # your stuff
return {"array": data, "max_len": self.max_len}
class KerasClassifier(...):
...
def fit(data, y):
self.max_len = data["max_len"]
self.build_model()
X = data["array"]
... # your stuff
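Another option, if you keep the two separate parameters, is to give GridSearchCV a list of parameter dictionaries so that the tokenizer's maxlen and the network's max_length always move together. A rough sketch, assuming the step names 'Tokenizepadder' and 'NN' and the pipeline object from the question, and that the KerasClassifier exposes a max_length parameter:

from sklearn.model_selection import GridSearchCV

# Each dict is one self-consistent combination, so the two lengths never diverge.
param_grid = [
    {"Tokenizepadder__maxlen": [100], "NN__max_length": [100]},
    {"Tokenizepadder__maxlen": [200], "NN__max_length": [200]},
]
search = GridSearchCV(pipeline, param_grid, cv=3)
# search.fit(X_train, y_train)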
|
I am trying to do some hyper-parameter tuning in my pipeline and have the following setup:

model = KerasClassifier(build_fn = create_model, epochs = 5)
pipeline = Pipeline(steps =[('Tokenizepadder', TokenizePadding()),
                            ('NN', model)])

I have a variable 'maxlen' in both the Tokenizepadder and my neural network (for the neural network it is called max_length; I was afraid naming them the same would cause errors later in the code). When I try to perform a grid search, I am struggling to have these values correspond. If I perform grid search for these values separately, they won't match and there will be a problem with the input data not matching the neural network. In short, I would like to do something like:

pipeline = Pipeline(steps =[('Tokenizepadder', TokenizePadding()),
                            ('NN', KerasClassifier(build_fn = create_model, epochs = 5, max_length = pipeline.get_params()['Tokenizepadder__maxlen']))])

So that when I am performing a grid search for the parameter 'Tokenizepadder__maxlen', it will change the value 'NN__max_length' to the same value.
|
Pipeline GridSearchCV, corresponding parameters in different steps
|
You have two options: assign the output of the Invoke-VMScript cmdlet to $null, or pipe the output of the Invoke-VMScript cmdlet to the Out-Null cmdlet.

Example:

$null = Invoke-VMScript -VM $VMs_Name$i -ScriptText $script1 -GuestUser root -GuestPassword xsignnet1 -ScriptType Bash
# or:
Invoke-VMScript -VM $VMs_Name$i -ScriptText $script1 -GuestUser root -GuestPassword xsignnet1 -ScriptType Bash | Out-Null

Demonstration:

function Verb-Noun
{
    "normal output, shouldn't be printed when pipe to Out-Null"
    Write-Warning "you should see me, I am a warning"
    Write-Error "you should see me, I am an error"
}
Verb-Noun | Out-Null

Output:
|
Noob asking for help hereI have few scrips written for users to run, i would like the users to see ONLY Error and warning output without all the success output.Online search lead me to believe i need to present the piplines of Error and Warning, but nowhere i could find a way to implement getting only those two in a script, can some one please explain how do i implement in a script seeing only Error and warning?For example the following part of the script shows the output below it, user experience is a lot of output which an error get lost in.$script1 = "sudo sed -i 's/'$hostn'/'$VMs_Name$i'/g' /etc/hosts"
Invoke-VMScript -VM $VMs_Name$i -ScriptText $script1 -GuestUser root -GuestPassword xsignnet1 -ScriptType Bash
$script2 = "sudo sed -i 's/'$hostn'/'$VMs_Name$i'/g' /etc/hostname"
Invoke-VMScript -VM $VMs_Name$i -ScriptText $script2 -GuestUser root -GuestPassword xsignnet1 -ScriptType BashVM : testing_vm_1
ExitCode : 0
ScriptOutput :
Uid : /VIServer=vsphere.local\[email protected]:443/VirtualMachine=VirtualMachine-vm-541/VMScriptResult=1665179762_0/
Length : 0
VM : testing_vm_1
ExitCode : 0
ScriptOutput : sudo: unable to resolve host Ubuntu-Template: Resource temporarily unavailable
Uid : /VIServer=vsphere.local\[email protected]:443/VirtualMachine=VirtualMachine-vm-541/VMScriptResult=-942405014_0/
Length : 79
|
Show only Errors and warnings in Powershell
|
Here's how I got the ID from json response.def response = sh(script: 'curl -X POST -H "Authorization:test" -H "content-type: multipart/form-data" https://api/upload', returnStdout: true)
def responseObject = readJSON text: response
def ID = "$responseObject.id"
println("ID: $ID")
|
I'm creating a CI for my app using jenkinsBelow is an additional script I call after building my appscript{
sh 'curl -X POST -H "Authorization:test "https://api/upload" -F "file=@path"'
}Above script will return json response, how can I extract the ID field from json and store it on a variable?
|
Extracting json response from http request on jenkins pipeline
|
Thanks to Mohamed for sharing and confirming. Summary:

Issue description: Azure pipeline is running old commit version files instead of the latest commit.

Answer: According to Mohamed's description, it worked perfectly after clearing the cache of the pipeline.
|
I'm trying to run a pipeline with Azure DevOps, I have a problem that it's not running the tests of the current version in the master branch. It's running older files, files from the old commit. I don't know why this is happening. The old files shouldn't be run because they don't exist anymore in the master branch after the last pull request is complete and merged.
Has anyone any idea how to solve this problem?
|
Azure pipeline is running old version files (before last commit)
|
As suspected, this is a pipelining issue w.r.t the amount of the data a particular USB port can carry.To prevent frame drop or overload of data through the USB, it has to be connected to a Motherboard that has USB 3.1 Gen 1 specifications.Refer to page 78 of this documenthttps://www.intelrealsense.com/wp-content/uploads/2020/06/Intel-RealSense-D400-Series-Datasheet-June-2020.pdfMy AMD machine does not have the in-build USB 3.1 Gen 1 specifications (has USB 3.0) and hence the overload.
|
It works like a charm when theRGB moduleof the camera has a resolution of 1280x720, and FPS as 15 frames/sec. The depth mode and IMU work fine in all the settings.But if the resolution is increased above 1280x720 - 15 frames/sec, I face aRuntime error: backend-v412.cpp:988 - Frames didn't arrive within 5 seconds.Other forms of this error:10:41:49 [Warn] .../backend-v4l2.cpp:988 - Frames didn't arrived within 5 secondsIt seems like the pipeline is not able to handle the framebuffers, and there is quite a lot of drop in the frames, specifically, if the resolution is kept above 1280x720, 15 frames/sec.See the graph below with the resolution of1280x720, 30 frames/sec. How do I correct the above?
|
Intel RealSense D435i frames drop on Intel® RealSense™ SDK 2.0
|
It is doing an exact name match of the build tag. If you wanted this to work, you'd need to put the hard-coded build number. Not the template for creating the build number.For my product, we build in both debug and release configurations. I will stamp a tag either { debug, release}. Then on the release pipeline, you might have the build tag only flagged for { release }, so you don't ever deploy debug copies.
|
I'm trying to understand how the option Build tags works inside the Continuous deployment trigger of the release pipeline.build tagsHere I can add Build tags, in my build pipeline is set the following build tags:$(Build.DefinitionName)_$(Build.BuildNumber)But when I put the same inside the build tags at the Release pipeline. It won't do anything.The tag's are succesfully added:tagWhat I'm trying to archive is that when a build is successful a tag is created and the release pipline is triggered when the tag is the same as the one I set after the build.Is this how it should work, or do I mix thing up?
|
Azure DevOps Release pipeline using build tags
|
I assume that rendering of pipelines won't be optimized in the classic Jenkins UI anymore. Try theBlue Ocean UI, which is available as a Jenkins plugin. It renders pipelines more pleasingly:
|
I am creating a trigger job that starts a batch of jobs using a matrix pipeline like this:
pipeline {
agent any
stages {
stage ('Build') {
matrix {
axes {
axis {
name 'PLATFORM'
values 'centos-6', 'centos-7', 'ubuntu-14.04', 'ubuntu-18.04'
}
axis {
name 'PROJECT'
values 'engine', 'documentation', 'monitoring'
}
}
stages {
stage ("building")
{
steps {
build job: "${PROJECT}/${PLATFORM}", parameters:[], propagate:true, wait:true
}
}
}
}
}
}
}

This works just fine, but the automatically generated user interface for the status report is somewhat suboptimal. As you can see, there are two blocks, even though I only have one stage per cell. I would really like to get rid of that "Matrix" block to reduce the screen width for a quick overview. Is that possible?
Also, the blocks in the report turn green nearly instantly (I think once the job has been triggered.) I would like them to stay "neutral" until the triggered job has been finished. Is that possible?
Thanks in advance!
|
How can I configure the Jenkins Pipeline UI for matrix jobs?
|
I was able to get the most recent commit id when using GitHub as my source provider.

aws codepipeline get-pipeline-state \
    --name my-pipeline-name \
    --query 'stageStates[?stageName==`Source`].actionStates[0].currentRevision.revisionId' \
    --output text

This assumes the stage name is Source (change it to whatever you've named your stage), and that the first action in that stage is where you retrieve the source code.
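If you want to collect this for 40+ pipelines into a single output file, the same lookup can be scripted with boto3. A rough sketch (the stage name "Source" and the output file name are assumptions):

import csv
import boto3

client = boto3.client("codepipeline")

with open("pipeline_commits.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["pipeline", "commit_id"])
    for page in client.get_paginator("list_pipelines").paginate():
        for pipeline in page["pipelines"]:
            state = client.get_pipeline_state(name=pipeline["name"])
            commit_id = ""
            for stage in state.get("stageStates", []):
                if stage.get("stageName") == "Source":
                    actions = stage.get("actionStates", [])
                    if actions and "currentRevision" in actions[0]:
                        commit_id = actions[0]["currentRevision"].get("revisionId", "")
            writer.writerow([pipeline["name"], commit_id])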
|
With help of @maafk I was able to get information about each pipeline inthisquestion. I am now trying to find the commit id for the prod step for each pipeline. I have 40+ pipelines that I need to get commitids for and currently, i am doing it from console, by going to each pipeline, click on the details and copy the commitId from the pop-up.I would like to automate this part and create an output file, that will have the pipeline name and the commit id for each of my 40+ pipelines.I am trying to look for it in the json returned byget-pipeline-statebut not finding it. In some of the examples, I have seen there is a variable defined, where do I find that, if that is the correct way to get commit id?Thank you
|
Retrieve commit Id from AWS Codepipeline using CLI
|
Try this:

pipe.steps[-1][1].model_.aic()
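For readability you can also reach the fitted AutoARIMA step by name. A small sketch, assuming the pipeline from the question has already been fitted and that pmdarima's Pipeline exposes named_steps like scikit-learn's (otherwise fall back to pipe.steps[-1][1]):

arima_step = pipe.named_steps["arima"]   # the fitted AutoARIMA step
fitted = arima_step.model_               # the underlying ARIMA model it selected
print(fitted.aic(), fitted.bic())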
|
I need to get the best estimator metrics, like AIC and BIC.

import pmdarima as pm
pipe = Pipeline([
("fourier", FourierFeaturizer(m=12, k=4)),
("arima", pm.AutoARIMA(start_p=1, d=None, start_q=1, max_p=4,
max_d=3, max_q=4, start_P=1, D=None, start_Q=1, max_P=2,
max_D=1, max_Q=2, max_order=10, m=12, seasonal=False,
stationary=False, information_criterion='aic', alpha=0.05,
))])
pipe.fit(y_train,X_train)
pipe.summary()

Summary: Output

How can I fetch the estimator values? Thank you
|
How to get AIC value from the pipe.fit() in pmdarima module
|
Welcome to Stack Overflow! There are several ways of using custom functionality in sklearn pipelines; I think FunctionTransformer could fit your case. Create a transformer that uses zscore and pass the transformer to make_pipeline instead of calling zscore directly. I hope this helps!
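A minimal sketch of that idea (note that FunctionTransformer is stateless, so unlike StandardScaler it does not learn the mean/std on the training fold and reuse them on the test fold):

from scipy import stats
from sklearn.preprocessing import FunctionTransformer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

zscorer = FunctionTransformer(lambda X: stats.zscore(X, axis=0))
clf = make_pipeline(zscorer, DecisionTreeClassifier())
# scores = cross_val_score(clf, X, y, cv=cv)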
|
I am trying to normalize my data at each step of the cross-validation and I came across this question. As suggested, I went to the scikit-learn documentation and found this example:

from sklearn.pipeline import make_pipeline
clf = make_pipeline(preprocessing.StandardScaler(), svm.SVC(C=1))
cross_val_score(clf, X, y, cv=cv)

This looks indeed like what I am trying to achieve; however, my intention is to use a z-scorer instead of the StandardScaler, so I tried this:

clf = make_pipeline(stats.zscore(), DecisionTreeClassifier())

But I get an error saying this:

TypeError: zscore() missing 1 required positional argument: 'a'

What should be the argument of zscore()?
|
How to create a scikit-learn pipeline that applies z-score and cross-validation?
|
You can't call a string variable as a method, but you can try this:steps {
script {
"${INSTANCE}adaptation"()
}
}
|
In my Jenkins pipeline I need to call several methods based on the parameter that I get at the run time. For example, If I give the parameter as "Development", it should call the method "Developmentadaptation" similarly for other parameters as well. Below is the code which i tried where INSTANCE is the parameter for the build and if I give the parameter as qa, then it should call the method "qaadaptation"steps {
script {
adaptcall = INSTANCE + adaptation;
adaptcall()
}
}Error Message isPossible solutions: wait(), any(), wait(long), take(int), any(groovy.lang.Closure), each(groovy.lang.Closure)
|
Method call in Jenkins pipeline
|
Unfortunately there's no autorefresh :/ But you could create a GitHuB issue about a feature here...? That would help many...
|
does anyone know how can Iauto refreshspinnaker pipeline's console output AUTOMATICALLY?To check latest logs, I have to close and re-open the output manually everytime :(
|
how to auto refresh spinnaker pipeline's console output
|
I don't think there's a way to set MST as the default time zone in Data Fusion; however, I tried to replicate the scenario and I was able to useparse-as-date DATE_COLUMN MSTto parse the column and insert it into BigQuery with the correct time in UTCMar 11, 2020, 11:45:40 AM UTC.
|
I'm am building a pipeline in Data Fusion where we use the Database Plugin to ingest data from our on-prem Oracle DB and insert into a BigQuery table. The Database Plugin correctly inferstimestampdata types for date fields in our Oracle tables. The issue is, however, that the date fields are actually in MST timezone. Data Fusion, however, assumes they are in UTC.Ex: Date in on-prem DB isMar 11, 2020, 5:45:40 AM MSTand it comes up asMar 11, 2020, 5:45:40 AM UTCin BigQuery.In the pipeline, I am using the Wrangler Plugin to transform column data types using directives. I tried using theparse-as-date DATE_COLUMN US/Mountaindirective, but it did not work.I have asked GCP support if there's a way to set default Data Fusion timezone to MST. I'm asking here to see if there's a way to do it with Plugins.
|
How to assume timestamp is MST (US/Mountain) instead of UTC
|
After some investigation, we figured out that at this moment it is not possible to implement this plugin in the pipeline job.
|
I have freestyle Jenkins job with Jira Release Version Parameter:I need to migrate this job to use Jenkins pipeline instead of freestyle job. Can somebody give me a piece of pipeline code (in declarative language) which does that?
|
How to write jenkins pipeline for Jira Release Version parameter
|
You can export a pipeline from either the pipeline list view or pipeline detail view and import it in pipeline studio when you need it.Pipeline List View ExportPipeline Detail View Export
|
Is it possible to keep the pipeline even after the Data Fusion instance is deleted? We are planning to delete the instance every day at EOD.
|
How to save the pipeline after deletion of Data Fusion instance [closed]
|
Theaudiomixerelement does take multiple audio streams and mixes them into a single audio stream.
|
I currently know how to blend two videos into one, it was very hard to learn how to do this (more than 30 continuous hours researching), I've used the following pipeline:gst-launch-1.0 filesrc location=candidate.webm ! decodebin ! videoscale ! video/x-raw,width=680,height=480 ! compositor name=comp sink_1::xpos=453 sink_1::ypos=340 ! vp9enc ! webmmux ! filesink location=out.web filesrc location=interviewer.webm ! decodebin ! videoscale ! video/x-raw,width=200,height=140 ! comp.In this case I'm blending two videos so that the second of them is in the right bottom corner, and the first one is the "background". Well, does somebody knows how can I get both audios in the same file too? I hope someone find useful my pipeline.
|
How can you blend two videos (with their both audio) in a single video using gstreamer?
|
In order to achieve the first step in the tutorial you are following (Ingest CSV (comma-separated values) data to BigQuery using Cloud Data Fusion), you need to set up a functioning Pub/Sub system. This can be done via the command line, the console, or, in your case, best via one of the client libraries. If you follow that tutorial you should end up with a functioning Pub/Sub system. At that point you should be able to follow the original tutorial.
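For the ingestion piece itself, a minimal sketch with the Python client library might look like the following. The project ID, topic name, API URL, and date parameter are all assumptions to be replaced with your own values:

import json
import requests
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "api-ingest")

# Build the GET request dynamically, e.g. with a date filter parameter.
params = {"date": "2020-03-11"}
response = requests.get("https://example.com/api/endpoint", params=params)
response.raise_for_status()

# Publish each record as a separate Pub/Sub message.
for record in response.json():
    future = publisher.publish(topic_path, data=json.dumps(record).encode("utf-8"))
    future.result()  # block until the message is accepted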
|
I have been trying to build a pipeline in Google Cloud Data Fusion where data source is a 3rd party API endpoint. I have beenunable to successfully use the HTTP Plugin, but it has been suggested that I use Pub/Sub for the data ingest.I've been trying to followthis tutorialas a starting point, but it doesn't help me out with the very first step of the process: ingesting data from API endpoint.Can anyone provide examples of using Pub/Sub -- or any other viable method -- to ingest data from an API endpoint and send that data down to Data Fusion for transformation and ultimately to BigQuery?I will also need to be able to dynamically modify the URI (e.g., date filter parameters) in the GET request in this pipeline.
|
Google Cloud Pub/Sub to ingest data from API endpoint and publish as message
|
This was reported 3 years ago in gitlab-org/gitlab-runner issue 1809: "Use variable inside other variable". A workaround is to set vars in a before_script instead of variables. So the example given in the issue would work if written as:

before_script:
- export VAR1="${CI_PIPELINE_ID}"
- export VAR2="test-${VAR1}"

Update February 2020: Philippe Charrière adds that the issue is not closed - the milestone is 13.0 (for May 2020).

Update Sept. 2021: one last validation is pending, and if that's solved then this is shipping in 14.3. So, soon (14.3, Sept. 2021):

this-job-name-123:
except:
variables:
- $CI_COMMIT_MESSAGE =~ /Skip $CI_JOB_NAME/i
script:
- echo The job runs unless the commit message contains "Skip this-job-name-123"
|
Here is my simple yaml file:

image: my/docker/image
stages:
- print
- testvarbridge
variables:
INCOMING_VAR: $ENV_VAR
print_these:
stage: print
script:
- echo $INCOMING_VAR
- export $INCOMING_VAR
testvarbridge:
stage: testvarbridge
variables:
TEST_VAR: $INCOMING_VAR
trigger:
project: my-project/pipeline-two
branch: ci-cd

The $ENV_VAR is a variable in the project for testing... it just says "this_is_the_variable". When I trigger the pipeline, the print stage correctly prints:

echo $INCOMING_VAR
this_is_the_variable

But when the second pipeline is triggered, it is just set up to do a simple echo command of the variable that is passed in, and it echoes this:

echo TEST_VAR
$ENV_VAR

As you can see, when the testvarbridge stage sets up the variable TEST_VAR, it is grabbing the $ENV_VAR variable up top as a literal string. It does not evaluate it and grab the value associated with that variable. Am I missing something?
|
Unable to pass variables through gitlab bridge to another pipeline
|
With scikit-learn 0.22, it works like a charm.from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import plot_roc_curve
X, y = load_breast_cancer(return_X_y=True)
pipe = make_pipeline(StandardScaler(), LogisticRegression())
pipe.fit(X, y)
plot_roc_curve(pipe, X, y)
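Note that plot_roc_curve was deprecated and later removed in newer scikit-learn releases; on a recent version the equivalent call, which also accepts a fitted pipeline, is:

from sklearn.metrics import RocCurveDisplay
RocCurveDisplay.from_estimator(pipe, X, y)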
|
Is there any way to use plot_roc_curve with a pipeline as the estimator? I mean:

plot_roc_curve(pipeline, X_test, y_test)
|
scikit-learn: plot_roc_curve with pipeline as estimator
|
The problem here was the non-existence of a .dockerignore file. I simply added a .dockerignore file with the content pgdb at the same level as the Dockerfile. This file avoids copying the protected pgdb folder into the docker container, so no permission error occurs.
|
I have deployed a Django app with a docker container. The source code of the app is in a Bitbucket repository. Now I want to set up a pipeline for the master branch which is intended to make deployment automatic on merge. The problematic part of the pipeline script is below:

docker-compose up --build -d

The above line results in an error that says:

Permission denied: '/path/to/docker/volume/pgdb'

My docker-compose file section related to postgres is below:

postgres:
container_name: arw-postgres
image: postgres:11
ports:
- 5432:5432
volumes:
- ./pgdb:/var/lib/postgresql/data
env_file: .env

The above docker-compose command runs normally with sudo privileges. Actually, I can connect to the remote host as the root user and run this command. But I don't want to expose my root user's credentials. How can I recreate my docker container without sudo privileges?
|
Bitbucket pipeline docker volume permissions at recreating containers
|
Yes, I can point you to the documentation if you would like to see if this works for the tables you might already have set up.

"Is there any way to have a TASK run if a view from an external Snowflake data share is refreshed, i.e. dropped and recreated?" You could create a stored procedure to monitor the existence of the table; I have not tried that before though, so I will see if I can ask an expert.

Separately, "is there any way to guarantee that the task runs at most once on a specific day or other time period?" Yes, you can use CRON with optional parameters to schedule specific days of the week or times. An example:

CREATE TASK delete_old_data
  WAREHOUSE = deletion_wh
  SCHEDULE = 'USING CRON 0 0 * * * UTC';

Reference: https://docs.snowflake.net/manuals/user-guide/tasks.html and, more specifically, https://docs.snowflake.net/manuals/sql-reference/sql/create-task.html#optional-parameters
|
Snowflake's documentation illustrates to have a TASK run on a scheduled basis when there are inserts/updates/deletions or other DML operations run on a table by creating a STREAM on that specific table.Is there any way to have a TASK run if a view from a external Snowflake data share is refreshed, i.e. dropped and recreated?As part of this proposed pipeline, we receive a one-time refresh of a view within a specific time period in a day and the goal would be to start a downstream pipeline that runs at most once during that time period, when the view is refreshed.For example for the following TASK schedule'USING CRON 0,10,20,30,40,50 8-12 * * MON,WED,FRI America/New York', the downstream pipeline should only run once every Monday, Wednesday, and Friday between 8-12.
|
Run Snowflake Task when a data share view is refreshed
|
It sounds to me that you need helper methods that you can reuse; you're most of the way there, you just need to extract out the methods. As for getting the combinations correct, I'd just hard-code the combinations. It looks like your steps are sequential, so there won't be such a test as Step2+Step3 because it would really be Step1+Step2+Step3. Tests are normally data driven, and having logic in the test to vary behaviour based on data with something such as Combinatorial adds unwanted logic into the test.

class TestPipeline
{
private int doSomeStuff()
{
return someStuff;
}
private int doSomething(int someStuff)
{
return something;
}
private int doSomethingElse(int something)
{
return somethingElse;
}
[Testcase]
public void Step1()
{
////do something
int outputStep1 = doSomeStuff();
// assert on outputStep1 here; a void test method cannot return a value
}
[Testcase]
public void Step1Step2()
{
int inputStep1 = doSomeStuff();
int outputStep2 = doSomething(inputStep1);
// assert on outputStep2 here; a void test method cannot return a value
}
[Testcase]
public void Step1Step2Step3()
{
int inputStep1 = doSomeStuff();
int outputStep2 = doSomething(inputStep1);
int result = doSomethingElse(outputStep2);
}
}
|
I have the following problem.I have a process that is done in 3 steps:Step 1 -> Step 2 ->Step 3I want to be able to test all combinations.Step1
Step1+Step2
Step2
Step2+Step3
Step1+Step2+Step3In order to do this i would like to be able to return something from each of myUnit Tests.
I do not want to create global variables and mutate them every single time.class TestPipeline
{
[Testcase]
public Int Step1()
{
////do something
Int outputStep1=doSomeStuff();
return outputStep1;
}
[Testcase]
public Int Step2()
{
Int inputStep1=Step1();
Int outputStep2=doSomething(inputStep1);
return outputStep2;
}
[Testcase]
public void Step3()
{
Int inputStep2=Step2();
Int result=doSomethingElse(inputStep2);
}
}How can this be done ?
|
How to return value from unit test?
|
This option is used to fail the stage if any pipeline expression like ${...} inside it failed to be processed, even if the stage itself was successful. For example, for the Evaluate Variables stage this option is enabled by default.
|
I am working on Spinnaker Pipeline. I noticed there is an option calledFail stage on failed expressionswhen editing the stag through the Web UI. I didn't find any explanation about it in the docs, could somebody give an example about it?
|
Fail stage on failed expressions in Spinnaker pipeline stag
|
My current solution is as follows. I've skipped the use of pipelines and removed the exclusion of the ./vendor folder from my .gitignore file. I've created a subdomain with git support, allowing me to pull and deploy my development branch. I did the same for my production/master branch.
|
I'm currently setting up a pipeline (FYI, I'm completely new to CI/CD) in Bitbucket for my Laravel project that should automatically deploy the latest build of my master branch to my website. Because the server doesn't have composer installed, I cannot install the dependencies or deploy the migrations that my project needs. Is it possible to build the entire project using the pipeline and move it completely over to the server using something like git-ftp? Below is my bitbucket-pipelines.yml file.

image: php:7.2-fpm
pipelines:
branches:
master:
- step:
caches:
- composer
script:
- apt-get update && apt-get install -y unzip gnupg ssh
- curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
- composer install
- curl -sL https://deb.nodesource.com/setup_8.x | bash -
- apt-get install -y nodejs
- npm install
- npm run production
- php -r "file_exists('.env') || copy('.env.pipelines', '.env');"
- composer dump-autoload
- php artisan key:generate
- php artisan migrate
- apt-get -qq install git-ftp
- git ftp init --user $FTP_USERNAME --passwd $FTP_PASSWORD $FTP_HOST_PATH_P
|
Laravel project with bitbucket pipeline on shared hosting
|
Actually I did not find a way to import numexpr on Quantopian, but on Jupyter it does not give problems, so the issue is related to the online IDE. I simply rewrote the fast oscillator indicator in another way to use it inside the pipeline in the Quantopian online IDE.

class Fast(CustomFactor):
inputs=(USEquityPricing.close,USEquityPricing.high,USEquityPricing.low)
window_length=14
def compute(self, today, assets, out, close, high, low):
highest_high= nanmax(high, axis=0)
lowest_low= nanmin(low, axis=0)
latest_close= close[-1]
out[:]= ((latest_close - lowest_low) / (highest_high - lowest_low)*100)
|
I'm trying to compute some technical indicators with some of the commands in this link: https://github.com/enigmampc/catalyst/blob/master/catalyst/pipeline/factors/equity/technical.py,
but in the Quantopian notebook I'm not able to run "from numexpr import evaluate", so evaluate is not defined.
How can I solve this?

from numexpr import evaluate

class FastochasticOscillator(CustomFactor):
inputs=(USEquityPricing.close,USEquityPricing.high,USEquityPricing.low)
window_safe=True
window_length=14
def compute(self, today, assets, out, closes, highs, lows):
highest_high= nanmax(highs, axis=0)
lowest_low= nanmin(lows, axis=0)
latest_close= closes[-1]
evaluate(
'((tc - ll) / (hh - ll)) * 100',
local_dict={
'tc':latest_close,
'll':lowest_low,
'hh':highest_high,
},
global_dict={},
out=out,
K = FastochasticOscillator(window_length=14)

return Pipeline(columns={'K': K}, screen=base)

I'm working in the Quantopian notebook, and when I attempt the import it gives me this: InputRejected: Importing evaluate from numexpr raised an ImportError. Did you mean to import errstate from numpy?
|
from numexpr import evaluate on Quantopian
|
Sorry. There is no way to run Node JS during deployment on fortrabbit yet.
|
I'm using fortrabbit to deploy a PHP app which uses a Node project for the front end. Is there any way I can add npm run build to the deploy process instead of having to always build manually first and then deploy it through fortrabbit? (There is a fortrabbit.yaml file on the fortrabbit site that needs to be configured for every fortrabbit app, but the example doesn't show how to add that command to the deployment pipeline.)
|
Add npm run build to Fortrabbit deploy process
|
There is no callback registry in Dataflow out of the box. However, you can set a custom alert in Stackdriver which can alert you once the pipeline is finished.
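If you control the launching code, you can get callback-like behaviour yourself by running the pipeline without blocking and triggering the follow-up work from a watcher thread. The question appears to use the Java SDK; this sketch uses the Python SDK (an assumption) and hypothetical build_pipeline/on_finished callables just to illustrate the idea:

import threading
import apache_beam as beam

def run_with_callback(pipeline_options, build_pipeline, on_finished):
    p = beam.Pipeline(options=pipeline_options)
    build_pipeline(p)            # add your transforms to the pipeline
    result = p.run()             # returns without waiting for a Dataflow job to finish

    def watch():
        result.wait_until_finish()   # blocks only this watcher thread
        on_finished(result.state)    # e.g. reconfigure the system, run cleanup

    threading.Thread(target=watch, daemon=True).start()
    return result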
|
Is it possible to get a callback once a Dataflow pipeline is completed?
After the pipeline is completed I have to make some configuration changes to the system to use the new output generated by the pipeline, plus some other cleanup.
Now I am actually using the waitUntilFinish() function to halt the program flow and do the configuration changes after that. So while test-running on the local system, it halts the developer's command prompt, or the user has to wait for the pipeline to complete. So is there a better way to do it? Like a callback mechanism?
|
How to perform a set of action after a Dataflow pipeline is completed?
|
In order to modify multiple nested arrays, one approach is to $unwind them, modify the data as required, and finally group the unwound data back together.

db.collection.aggregate([
{"$unwind":"$array1"},
{"$unwind":"$array1.array2"},
{"$project":{
_id:1,
"array1.id":1,
"array1.array2.id":1,
"array1.array2.string":{"$concat":["$array1.array2.string","world"]}
}
},
{
"$group": {
"_id": {"obj_id": "$_id", "arr1_id": "$array1.id"},
"array2":{$push:"$array1.array2"}
}
},
{
"$group": {
"_id": {"obj_id": "$_id.obj_id"},
"array1":{"$push":{"id":"$_id.arr1_id", "array2": "$array2"}}
}
}
])
|
I have a string in an array which is inside another array. I would like to concatenate another string with to the already-existing string (which is at the last index of both arrays). I have the indexes of the last elements store in the document{
_id: 'id',
array1: [
0: [...]
...
m: [
0: {}
...
n: {id: 'id', string: 'hello'}
]
],
i: m,
j: n
}I want to concatenate ' world!' to the string atarray1[m][n].stringor, in other words, toarray1[$i][$j].stringI have already tried the following stages, but none have worked.{ $addFields: { array1.$i.$j.string: <expression> }
{ $addFields: { array1.$[i].$[j].string: <expression> }
{ $addFields: { array1.[$i].[$j].string: <expression> }Data{
_id: 'abc',
array1: [
0: {
id: 'def'
array2: [
0: {
id: '123',
string: 'hello'
}
]
}
],
i: 0,
j: 0,
}After running the pipeline, I'd like to get the same structure but with 'hello world' in the string.{
_id: 'abc',
array1: [
0: {
id: 'def'
array2: [
0: {
id: '123',
string: 'hello world'
}
]
}
],
i: 0,
j: 0,
}
|
How to use a field's value as an array index in $addFields stage (aggregation)
|
One of the ways to get access to your directory is through ${workspace}; more details about the workspace are in the Jenkins pipeline documentation. If you share more details I can probably help with the Jenkinsfile.
|
I want to obtain the Jenkinsfile's name or its path. getProtectionDomain and the File classes are restricted. I have the following structure:
-pipeline-development
--Jenkinsfile-development.groovy
-pipeline-test
--Jenkinsfile-test.groovy

I would like to get "development" and "test" in the Jenkinsfile to set an environment variable.
|
How to get Jenkinsfile name or the path?
|
You need to export your source code as an artifact from the lint pipeline and fetch it on the tests pipeline.Having said that I recommend to have both the tasks as part of the same pipeline, as different stages may be? In my personal experience I've always seen configuring pipelines based on the set of materials that operate together has helped me design my flows more effectively.
|
I need to create a pipeline to do unit and integrations tests and it will executed after lint pipeline on GO CD.I've created a pipeline having as a material the previous pipeline (lint) but and the code is not available to the test pipeline. The test pipeline is automatically started when lint pipeline is successfully finished.I have a git repository as a material on lint pipeline and it must be delivered to the next pipeline.So I need test pipeline have the git repository from previous pipeline without cloning git again.
|
Running pipeline starting from another pipeline result
|
You need a comma after the first input file:input:
fasta = "Drosophila_melanogaster.BDGP6.22.dna.toplevel.fa",
gtf = "bdgp6/Drosophila_melanogaster.BDGP6.95.gtf"However, the error you are seeing isn't the error I usually get when that comma is missing. I wonder if the space at the beginning of the command is causing the problem. You could also try removing it:shell:
"STAR --runThreadN 4 --runMode genomeGenerate --genomeDir {output} --genomeFastaFiles {input.fasta} --sjdbGTFfile {input.gtf}"
|
I want to generate a genome index using STAR. The bash code works in the terminal, but I want to convert it to a Snakefile. This is the bash code:

STAR --runThreadN 4 --runMode genomeGenerate --genomeDir star --genomeFastaFiles Drosophila_melanogaster.BDGP6.22.dna.toplevel.fa --sjdbGTFfile bdgp6/Drosophila_melanogaster.BDGP6.95.gtf

After running the bash code it generates more than one file in the star directory. One of the files is called genomeParameters.txt; I need this file for further use. In snakemake:

rule index:
input:
fasta = "Drosophila_melanogaster.BDGP6.22.dna.toplevel.fa"
gtf = "bdgp6/Drosophila_melanogaster.BDGP6.95.gtf"
output:
"star"
shell:
" STAR --runThreadN 4 --runMode genomeGenerate --genomeDir {output} --genomeFastaFiles {input.fasta} --sjdbGTFfile {input.gtf}"The ERROR:SyntaxError in line 10 of /data/storix2/student/Thema11/dme/projectThema11/generateGenomeIndex:
Command must be given as string after the shell keyword. (generateGenomeIndex, line 10)
|
Error when converting bash code to snakefile
|
There's an undocumented API that I use in gocd-janitor. The API works like this:

"/go/pipelines/value_stream_map/" + pipeline + "/" + version + ".json"

where pipeline is the name of the pipeline and version is the pipeline counter for which you need the VSM. PS: removing the .json at the end should open the VSM for that pipeline and run in HTML format.
|
In GoCD pipeline, we can export a pipeline metadata as json or xml. Same way, is it possible to export all the pipelines that belongs to a value stream map?
|
Is there a way to export value stream map in gocd?
|
SMH, 10 minutes after I posted this, I found the answer in this github issue.Instead, of a space delimiter, use a semi-colon and it works. :(Enable to select multiple input files in NuGet restore task #8369
|
Azure Devops Pipeline Task NuGetRestore@1 not accepting a list of solution files for iterationIn building an Azure Pipeline, I have found that some of my solution files, build code that must be pushed to a Nuget feed before the rest of the solution is built. I've written some Powershell to go off and discern this, and feed the list of files back as variables. In a subsequent, task I then try to use the list of solution files as input to the NuGetRestore@1 task and that is failing.variables:
SLNFILELIST: 'a/a.sln b/b.sln'
- task: NuGetRestore@1
displayName: restore slnfilelist
inputs:
solution: "$(SLNFILELIST)"Ideally, the NuGetRestore task above would iterate over both solution files a and b in the variable. However I get this (edited) output instead.Active code page: 65001
##[error]Error: Not found files: D:\a\1\s\a\a.sln D:\a\1\s\b\b.sln
##[error]Packages failed to restore
##[section]Finishing: restore slnfilelist

To some degree I know there is an iteration mechanism in this task, since if the solution is set to a value of "***.sln" the task will go and find all solution files in the current working directory and then iterate through them. So the task has the ability; the question is how that list is fed directly into the task.
|
Need to feed explicit list of VS SLN files to nuget restore task
|
python -m pytest --cache-clear -v -x -r a --junit-xml=tests/engine_tests --junit-prefix=measure_tests *.py --deselect Test1\.py --deselect Test2\.py --deselect Test3\.py --deselect Test4\.py

I tried this and it worked for me. Before that you need to install pytest:

pip install pytest

Documentation can be found by typing pytest --help in the terminal, or online.
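If you would rather drive this from Python (for example from a small wrapper script in the pipeline), pytest can also be invoked programmatically, and --ignore excludes whole files from collection. A minimal sketch, reusing the file names from the question:

import pytest

exit_code = pytest.main([
    "-v",
    "--ignore=test_sometest.py",
    "--ignore=test_somemoretest.py",
])
raise SystemExit(exit_code)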
|
I want to skip or exclude certain tests from the build or the pipeline. I am running nosetests -s -v *, which runs all the tests under a specific folder. Suppose there are about 30 tests and out of those 5 I want to skip. To do that I am trying nosetests -s -v * --exclude-test test_sometest.py test_somemoretest.py or nosetests -s -v * -- test_sometest.py test_somemoretest.py, but neither of them works for me. I am referring to this:

#!/bin/sh
cd tests/engine_tests/measures
nosetests -s -v * --exclude-test test_sometest1.py test_somemoretest2.py test_sometest3.py test_somemoretest4.py

Any help would be great!!
|
nosetests skip certain tests in python with multiple tests
|
Inspect what themulawdecelement does:Pad Templates:
SINK template: 'sink'
Availability: Always
Capabilities:
audio/x-mulaw
rate: [ 8000, 192000 ]
channels: [ 1, 2 ]
SRC template: 'src'
Availability: Always
Capabilities:
audio/x-raw
format: S16LE
layout: interleaved
rate: [ 8000, 192000 ]
channels: [ 1, 2 ]So basically it decodes Mu Law to PCM. If you want to save the raw Mu Law instead remove themulawdecelement.
|
I have a wave file with these properties.sampling rate = 16000 Hz
encoding = L16
channels = 1
bit resolution = 16

I want to make 2 pipelines:
1) throw this file's contents as RTP packets on port=5000
2) listen on port=500, catch the RTP packets, and make an audio file with the
following propertiessampling rate = 8000 Hz
encoding = PCMU
channels = 1
bit resolution = 8What I have tried is:
Sender:gst-launch-1.0 filesrc location=/path/to/test_l16.wav ! wavparse ! audioconvert ! audioresample ! mulawenc ! rtppcmupay ! udpsink host=192.168.xxx.xxx port=5000Receiver:gst-launch-1.0 udpsrc port=5000 ! "application/x-rtp,media=(string)audio, clock-rate=(int)8000, encoding-name=(string)PCMU, channels=(int)1" ! rtppcmudepay ! mulawdec ! filesink location=/path/to/test_pcmu.ulawBut I am getting L16 file at the Test.ulaw and not PCMUAny suggestion?
|
Creating a mulaw audio file from L16
|
write a custom filter and use itvar app = angular.module('myApp', []);
app.controller('myCtrl', function($scope, $filter) {
$scope.textExample ='address sample <br/> <br> text';
$scope.textFilteredInController = $filter('removeBreakTags') ($scope.textExample);
});
app.filter('removeBreakTags', function() {
return function(text) {
return text ? String(text).replace(/<br\s*\/?>/gm, '') : '';
};
});
app.filter('removeHTMLTags', function() {
return function(text) {
return text ? String(text).replace(/<br\s*\/?>/, '') : '';
};
});<script src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.6.6/angular.min.js"></script>
<div ng-app="myApp" ng-controller="myCtrl">
<p>{{ textExample | removeHTMLTags }}</p>
<p>{{ textExample | removeBreakTags }} </p>
<p> --- In Controller ---</p>
<p>{{textFilteredInController}}</p>
</div>
|
I have addresses stored in text format which includetags within them to separate Street Address from City and State with a New line between the two
How can I pipeline/filter the address text to removetags within the text?For Example
This removes spaces
{{Address | EliminateSpaces}}
|
Pipeline/Filter <br/> tags in Text through AngularJs Pipeline
|
You could do something like this to extend the objects fromGet-MailboxRegionalConfigurationwith additional information:Get-Mailbox -ResultSize Unlimited | ForEach-Object {
$name = $_.DisplayName
$_ | Get-MailboxRegionalConfiguration |
Select-Object *,@{n='DisplayName',e={$name}}
}
|
Get-Mailbox usernamegets me for instance the "Displayname" of a user.Get-MailboxRegionalConfigurationgets me some more information.I want to usePS> Get-Mailbox username | Get-MailboxRegionalConfiguration
Identity Language DateFormat TimeFormat TimeZone
-------- -------- ---------- ---------- --------
en-US M/d/yyyy h:mm tt W. Europe Standard Timeand I need also theDisplaynamefrom theGet-Mailbox. Can I do this with pipes?So far I have to use foreach and want to avoid that:$MBs = Get-Mailbox -ResultSize Unlimited
foreach ($MB in $MBs) {
$Name = $MB.DisplayName
$MRC = $MB | Get-MailboxRegionalConfiguration
$Lang = $MRC.Language
$DF = $MRC.DateFormat
$TF = $MRC.TimeFormat
$TZ = $MRC.TimeZone
}
|
How can I pass objects/values trough multiple pipes?
|
To specify an alert definition, you create a JSON file describing the operations that you want to be alerted on.
Following example creates an alert for Run Completion.
The JSON below will help you to create a similar one for an update alert.

{
"contentVersion": "1.0.0.0",
"$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
"parameters": {},
"resources": [
{
"name": "ADFAlertsSlice",
"type": "microsoft.insights/alertrules",
"apiVersion": "2014-04-01",
"location": "East US",
"properties": {
"name": "ADFAlertsSlice",
"description": "One or more of the data slices for the Azure Data Factory has failed processing.",
"isEnabled": true,
"condition": {
"odata.type": "Microsoft.Azure.Management.Insights.Models.ManagementEventRuleCondition",
"dataSource": {
"odata.type": "Microsoft.Azure.Management.Insights.Models.RuleManagementEventDataSource",
"operationName": "RunFinished",
"status": "Failed",
"subStatus": "FailedExecution"
}
},
"action": {
"odata.type": "Microsoft.Azure.Management.Insights.Models.RuleEmailAction",
"customEmails": [
"@contoso.com"
]
}
}
    }
  ]
}
|
is there any way to receive a mail (some kind of alert) when someone creates a new Pipeline in a Specific Data Factory?Something like "user XYZ created a new Pipeline"Thanks for your inputs,
Marcelo
|
Azure - Data Factory - New Pipeline Created
|
You've specified a scriptblock (or a command returned from get-command) as a pipeline element. That's an expression.To use the scriptblock, add a & in front of it.Read-Dataset $dataset `
| & $datasetProcessor `
| Export-Csv $path -NoTypeInformationhere's an example using built-in cmdlets:$sort=get-command sort-object
get-childitem | & $sort -property Length -descending
|
I want to parameterize a Powershell pipeline by inserting a pipeline step from a parameter like in the following simplified example.param (
$path,
$dataset,
$processor
)
$datasetProcessor = Get-Command 'Convert-Noop'
if ( $processor.Keys -contains $dataset.Name ) {
$datasetProcessor = $Processor[$dataset.Name]
}
Read-Dataset $dataset `
| $datasetProcessor `
| Export-Csv $path -NoTypeInformationThis results in the errorExpressions are only allowed as the first element of a pipelineMaybe it will work using Invoke-Expression, but then i would be no longer able to use the debugger.What approach may work?Thanks,
Steffen
|
How to pipeline variable cmdlets in Powershell?
|
List keys = new ArrayList(job.getTriggers().keySet());
for (int i = 0; i < keys.size(); i++) {
Object obj = keys.get(i);
if(obj.getDisplayName().contains("Build periodically")) {
println job.fullName + "," + job.getTriggers().get(keys.get(i)).spec
}
}
|
I need to grab the "Build Triggers -> Build Periodically" values using a groovy script. I need to check if it is enabled and what the Schedule values are but I have no luck while searching through Jenkins' github and the javadoc.jenkins api documentation.
|
Groovy syntax for grabbing build periodically property
|
If those files are generated by the application, they should be re-generated when your GitLab-CI will checkout the application source code, build and execute it.In that case, they will be in the GitLab-CI job workspace.No need to copy them then.
|
Sorry as the title is vague. I have Jenkins and GitLab in two separate docker containers connected by docker network. I am planning to create a CICD pipeline for an application. Although the application's source code in in my GitLab as well as GitLab container, there are some log files and some extra files at many paths in the code. This application is not owned by me and was previously executed on a completely different server. My question is do I need to copy these extra files, log files to GitLab GUI, GitLab container and Jenkins container or I can just copy these to GitLab container. I want to know if my jenkins runs only with code from SCM or will it have access to containers?
|
GitLab in container
|
After 2 hours of grinding, finally got it to work somehow.db.collectionb.aggregate([
{
$unwind: "$products"
},
{
$lookup:{
from:"collectiona",
localField:"A_id",
foreignField:"_id",
as:"colaref"
}
},
{
$unwind: "$colaref"
},
{
$unwind:"$colaref.products"
},
{
$project:{
colaProducts:"$colaref.products",
products:"$products",
idEq:{ $eq:["$colaref,products._id","$products._id"]}
}
},
{
$match:{
idEq: true
}
},
{
$project:{
quantity:{
$subtract:["$colaProducts.quantity","$products.quantity"]
}
}
}
])if I add a match for the CollectionB document ID, then I get somewhat like I wanted.
|
I have a Collection called A, where in there is a product array,
and I have a collection B, which also has a product array, A & B relationship is one to many by _id, for each document in A , there may be multiple documents in B,also B's product array only consists of products from A's Product array.This is collection A{
"_id": "abcdefg",
"products": [
{
"_id": "1",
"_product": "1",
"quantity": 12
},
{
"_id": "2",
"_product": "2",
"quantity": 32
},
{
"_id": "3",
"_product": "3",
"quantity": 12
}
]
}These are two docs in B
Doc 1{
"_id": "<<obj_id>>",
"A_id":"abcdefg",
"products": [
{
"_id": "<<_id>>",
"_product": "1",
"quantity": 6
},
{
"_id": "<<_id>>",
"_product": "2",
"quantity": 16
}
]
}and DOC 2{
"_id": "<<obj_id>>",
"A_id":"abcdefg",
"products": [
{
"_id": "1",
"_product": "1",
"quantity": 6
},
{
"_id": "2",
"_product": "2",
"quantity": 12
},
{
"_id": "3",
"_product": "3",
"quantity": 8
}
]
}Now, I need a pipeline/strategy in which I can get the difference of quantities of A's Products and (sum of all B'products) where ids of products are equal i.e something like this[
0,
4,
4
]
|
Subtract property of a document in a collection to a property of another document in a different collection, and return the difference array
|
Without reimplementing your code sections locally, here are some differences I notice between yours and mine, which is working.

JENKINSFILE
- I don't have a space before the underscore on my @Library line.
- Immediately after my @Library line I am importing my shared library class that implements the methods I want to call. In your case this would be import foo.DemoClass.
- My call to my method is of the form (new DemoClass(config, this)).testVarsInvokeDemoMethod().

SHARED LIBRARY CLASSES
- I don't have #!groovy in any of my groovy classes.
- My class is public and implements Serializable.

Hopefully one of these differences is the source of why it's not getting called.
|
I want to invoke method of src directory from vars directory, which it works in IDE. But it seems not work in Jenkins.1.project structure├── src
│ └── foo
│ └── DemoClass.groovy
└── vars
└── varDemo.groovy2.Content of DemoClass.groovy#!groovy
package foo
class DemoClass {
def testDemoMethod() {
println("src DemoClass testDemoMethod")
}
}3.Content of varDemo.groovy#!groovy
import foo.DemoClass
def testVarsDemo() {
println("vars varDemo.groovy testVarsDemo")
}
def testVarsInvokeDemoMethod() {
println("vars varDemo.groovy testVarsInvokeDemoMethod")
def demoClass = new DemoClass()
demoClass.testDemoMethod()
println("end vars varDemo.groovy testVarsInvokeDemoMethod")
}4.Jenkins pipeline@Library('tools') _
varDemo.testVarsDemo()
varDemo.testVarsInvokeDemoMethod()5.execute result in pipeline> git checkout -f b6176268be99abe300d514e1703ff8a08e3ef8da
Commit message: "test"
> git rev-list --no-walk c1a50961228ca071d43134854548841a056e16c9 # timeout=10
[Pipeline] echo
vars varDemo.groovy testVarsDemo
[Pipeline] echo
vars varDemo.groovy testVarsInvokeDemoMethod
[Pipeline] echo
end vars varDemo.groovy testVarsInvokeDemoMethod
[Pipeline] End of PipelineIt seem likedemoClass.testDemoMethod()not work. Why can't invokedemoClass.testDemoMethod()? If I want to invoke the method insrcdirectory, what should I do? Thank you!
|
Jenkins pipeline shared library can't invoke method in src directory
|
To pass the output of a Lookup activity to an Execute Pipeline activity, you need to define a parameter on the invoked pipeline and use '@activity('LookupTableList').output.value' to set the value of that pipeline parameter. See https://learn.microsoft.com/en-us/azure/data-factory/tutorial-bulk-copy-portal#create-pipelines for how the TriggerCopy Execute Pipeline activity uses the output of the LookupTableList Lookup activity, which is exactly the same as your scenario.
|
I have a Lookup Activity in a pipeline. Sequential to it i have multiple Execute Pipeline Activities. I want to use that lookup within the execute pipeline activities. Is that possible? Or re-creating the lookup in each the execute pipeline activity is the only option?I'm using ADF v2.
|
Use lookup from one Azure DF Pipeline into another Pipeline
|
We will be able to use parameterized pipelines (not jobs) after the GitLab 10.8 release (May 22, 2018): https://gitlab.com/gitlab-org/gitlab-ce/issues/44059
|
I need to create job into gitlab pipeline. I want trigger it manually and put into some variables for using into scripts. How can I do it? Can I?
|
Can I create parametrezed job into gitlab pipeline?
|
A Pipeline is used to chain sequential data-transformation steps, with the classifier/regressor as the last step. Something like first converting text to numbers with TfidfVectorizer and then training the classifier:

pipe = Pipeline([('vectorizer', TfidfVectorizer()),
                 ('classifier', RandomForestClassifier())])

For only a single estimator there is no need for a Pipeline. Here, in your code, it is used as a placeholder, so that parameters can be addressed via the 'classifier' prefix, and the classifier itself can be substituted from the params.
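As a minimal sketch of how that substitution plays out (assuming scikit-learn and a small toy dataset rather than the text data from the linked example), GridSearchCV swaps the 'classifier' placeholder step for each estimator listed in the search space and tunes its parameters:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

# 'classifier' is only a placeholder; the search space below swaps the real estimator in.
pipe = Pipeline([('scaler', StandardScaler()),
                 ('classifier', RandomForestClassifier())])

search_space = [{'classifier': [LogisticRegression(max_iter=1000)],
                 'classifier__C': np.logspace(0, 4, 5)},
                {'classifier': [RandomForestClassifier()],
                 'classifier__n_estimators': [10, 100]}]

grid = GridSearchCV(pipe, search_space, cv=5)
grid.fit(X, y)
print(grid.best_params_['classifier'])  # the estimator that won the search
print(grid.best_score_)

The best_params_ entry for 'classifier' holds the winning estimator, which is how a single pipeline definition ends up comparing several different models.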
|
I just came across this example of model selection via grid search here: https://chrisalbon.com/machine_learning/model_selection/model_selection_using_grid_search/

Question: The example reads

# Create a pipeline
pipe = Pipeline([('classifier', RandomForestClassifier())])
# Create space of candidate learning algorithms and their hyperparameters
search_space = [{'classifier': [LogisticRegression()],
'classifier__penalty': ['l1', 'l2'],
'classifier__C': np.logspace(0, 4, 10)},
{'classifier': [RandomForestClassifier()],
'classifier__n_estimators': [10, 100, 1000],
                 'classifier__max_features': [1, 2, 3]}]

As I understand the code, search_space contains the classifiers to be tried and their parameters. However, I don't get what the purpose of Pipeline is and why it contains RandomForestClassifier().

Background:
In my desired workflow, I need to train a doc2vec model (gensim) together with 3 different classifiers. Both the model and the classifiers should have their parameters tuned with GridSearch. I would like to store the results in a table and save the best model, that is, the one with the highest accuracy.
|
GridSearch on Model and Classifiers
|
The problem is that your parallel pipeline's stdout is getting consumed by a single stdin from |(cd /testfiles; tar xf -). So you also need to parallelize the tar xf - part. A possible solution is to treat that pipeline as a "mini-script" and pass the xargs-provided arguments through with $@:

find image -maxdepth 2 -mindepth 2 -type d -print | \
xargs -P 48 sh -c 'tar cf - --files-from $@ | tar -C /testfiles -xf -' --

By the way, I'd also be careful with -P 48; start with more frugal values until you find a comfortable trade-off for the I/O impact of the above.
|
I'm trying to copy a very large filesystem using a parallel pipeline of tar create/extract jobs with xargs. I can't seem to figure out the correct syntax.

find image -maxdepth 2 -mindepth 2 -type d -print | xargs -P 48 tar cf - --files-from | (cd /testfiles; tar xf -)

I get these errors:

xargs: tar: terminated by signal 13
xargs: tar: terminated by signal 13

But if I execute the same command without the -P option, it runs. It's just single-threaded and will take forever to do 50 million files across the 700K subdirectories. The following works, but it is slow:

find image -maxdepth 2 -mindepth 2 -type d -print | xargs tar cf - --files-from | (cd /testfiles; tar xf -)

So what am I missing?
|
xargs parallel tar pipeline
|
There are two ways to look at it (that I'm aware of):

The first is that your CI/CD pipeline builds and deploys your application, so unless you add malware to your application (possibly inadvertently, by depending on a compromised version of a library), it won't deploy malware.

The second is that you absolutely can add automatic security checking to your pipelines, for example by integrating with a static or dynamic malware scanner. You can make that a stage in your pipeline, somewhere before the deployment, that makes the pipeline halt and fail if the scanner detects malicious code.

(Note that some cloud-based malware scanners, such as VirusTotal, make all uploaded files available to all subscribers of their service, which might not be acceptable in some cases; be sure to read and understand the scanner's Terms of Service before you use it.)
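As a minimal, hypothetical sketch of such a gate step (scan-tool and its flags are placeholders, not any specific scanner's real CLI), a small script run as a pre-deployment stage could simply propagate the scanner's verdict as the stage's exit code:

import subprocess
import sys

# Run the (placeholder) scanner over the build artifacts; most scanners
# signal findings through a non-zero exit code.
result = subprocess.run(['scan-tool', '--recursive', 'build/'])

if result.returncode != 0:
    # Exiting non-zero here fails the stage and halts the pipeline before deployment.
    sys.exit('Malware scan reported findings - blocking deployment')

print('Malware scan clean - continuing to deployment')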
|
I have a CI/CD pipeline that deploys my Spring Boot application to PCF. It has a job that calls a shell script to deploy to the PCF environment. How can I ensure that it doesn't install malware, so that a hacker cannot mess it up?
Any ideas/suggestions are welcome.
|
How to secure CI/CD pipeline
|
You need to define $DeviceName as the first positional parameter for that to work:

function Get-Device {
[CmdletBinding()]
Param(
[Parameter(ValueFromPipelineByPropertyName)]
[string]$CustomerName = "*",
[Parameter(Position=0)]
[string]$DeviceName = "*"
)
Process {
# real process iterate on fodlers.
New-Object PSObject -Property @{
CustomerName = $CustomerName;
DeviceName = $DeviceName
}
}
}
|
Considering two functions designed to take values by property name, where the second function has its first argument passed by pipeline: can I use a positional parameter for this second function?

Example:

function Get-Customer {
[CmdletBinding()]
Param(
[string]$CustomerName = "*"
)
Process {
# real process iterate on fodlers.
New-Object PSObject -Property @{
CustomerName = $CustomerName;
}
}
}
function Get-Device {
[CmdletBinding()]
Param(
[Parameter(ValueFromPipelineByPropertyName)]
[string]$CustomerName = "*",
[string]$DeviceName = "*"
)
Process {
# real process iterate on fodlers.
New-Object PSObject -Property @{
CustomerName=$CustomerName;
DeviceName=$DeviceName
}
}
}

You can use it like:

Get-Customer "John" | Get-Device
Get-Customer "John" | Get-Device -DeviceName "Device 1"But can you do this (actually with provided code it doesn't work)?Get-Customer "John" | Get-Device "Device 1"
|
ValueFromPipelineByPropertyName: can I shift the function's next positional parameters?
|
I suggest using a hash instead of a list if you need to check for the existence of a key.
Lists are for adding many items ordered by insertion, not by key.
Hashes are keyed, and support the HEXISTS command.
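A minimal sketch of the idea, shown in Python with redis-py purely for illustration (the equivalent HSET/HEXISTS commands are also available through phpredis and Laravel's Redis facade); the uid and counter names below are hypothetical:

import redis

r = redis.Redis()

uid = 'user-123'        # hypothetical visitor id
counter = 'pageview'    # hypothetical counter name

# HEXISTS answers "have we seen this uid before?" on the server,
# without pulling the whole collection back to the application.
is_new = not r.hexists(counter + '_unique_hash', uid)

pipe = r.pipeline()
pipe.incr(counter)
if is_new:
    pipe.hset(counter + '_unique_hash', uid, 1)
    pipe.incr(counter + '_unique')
pipe.execute()

Note that the existence check runs as a direct call rather than inside the pipeline, because a pipelined read only returns its result when execute() runs; that is exactly why the original lRange-inside-the-pipeline approach never saw real data.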
|
I have a controller that is being called lots of times (thousands per minute), and I need to log every call without losing response speed. I have a piece of code as follows:

$redis = Redis::connection();
$redis->pipeline(function($pipe) use ($type, $redis)
{
// usual
$pipe->incr($type);
// check unique list
$len = $pipe->lLen($type.'_unique_list');
$list = $pipe->lRange($type.'_unique_list', 0, $len);
if(!in_array($this->uid, $list)) {
$pipe->rPush($type . '_unique_list', $this->uid);
$pipe->incr($type . '_unique');
}
});

Elsewhere I get the data from Redis and display it. The problem is that while I use $pipe->lLen and $pipe->lRange the numbers won't change (the interesting part is that neither $type nor $type . '_unique' changes). I've tried replacing $len with PHP_INT_MAX, but the problem remains the same. I've also tried adding $pipe->exec(); at the end, but it didn't help either. If I replace $pipe->lRange with $redis->lRange, everything starts working, but awfully slowly, because each Redis call waits for a response. How could I solve this situation?

UPD: I found out that $list taken from $pipe returns a Redis object, not an array. So the question is: how can I check whether the key exists in the Redis list without retrieving the list itself?
|
PHP Redis pipeline lRange not working correctly
|
That is how v1 works. If your upstream dataset fails, the second activity will stay in the waiting state until the first dataset has completed successfully.

If you are using a schedule, you would want to fix the problem with the first activity and run the failed slice again. If you're working with a one-time pipeline, you have to run the whole pipeline again after fixing the problem.

The timeout only applies once the processing actually starts, as is written in the Data Factory documentation:

If the data processing time on a slice exceeds the timeout value, it is canceled, and the system attempts to retry the processing. The number of retries depends on the retry property. When timeout occurs, the status is set to TimedOut.
|
I'm currently using Data Factory V1.

I have a pipeline with 2 chained activities. The first activity is a Copy activity that extracts a table from SQL DB into a .tsv file in Data Lake Store. The second activity is a Data Lake Analytics U-SQL activity that collects the data in the previously created .tsv file and adds it to an existing table in the Data Lake database.

Obviously, I only want the second activity to run after the first activity, so I used the output dataset from the first activity as the input data to the second activity, and it works fine. But, if the first activity fails, the second activity will be stuck in the state "Waiting: Dataset dependencies (The upstream dependencies are not ready)".

I have the policy->timeout property set for the second activity, but it only seems to apply after the activity has started. So, since the activity never starts, it doesn't time out and stays stuck.

How can I set a timeout for this "waiting" period? Thank you
|
Data Factory waiting timeout for upstream dependencies
|