Response: string, lengths 8 to 2k
Instruction: string, lengths 18 to 2k
Prompt: string, lengths 14 to 160
The first thing I would check is the cron settings of the other pipelines. If no YAML cron is defined, check the GUI settings: press Edit, then Triggers; under Scheduled you can see whether cron settings are defined in the GUI. As a second troubleshooting step, you could also deactivate PR triggers and branch triggers for the pipelines that you do not want to be triggered. A minimal YAML sketch is shown below.
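For illustration, a hedged sketch of the trigger settings involved; the schedule block lives only in pipeline A's YAML, and pipelines B and C carry explicit "none" triggers so they cannot be started by CI or PR events. The branch name develop comes from the question; the cron expression is just an example:

# Pipeline A: the only pipeline that should run on the schedule
trigger: none        # no CI trigger
pr: none             # no PR trigger
schedules:
  - cron: "0 3 * * *"            # example time, adjust to your "x time of the day"
    displayName: Nightly run on develop
    branches:
      include:
        - develop
    always: true

# Pipelines B and C: no schedules block at all, plus trigger: none / pr: none
# if they should only ever be run manually.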
I've been working on a cron-scheduled pipeline that runs at a set time of day on the develop branch, but I've run into a problematic issue. For example, I have pipelines A, B and C. The cron schedule should only trigger pipeline A. Although this does work, it also triggers pipeline B in parallel, which is completely unintended and not ideal, as that pipeline's functionality is different. Is there a way to configure the YAML file so that only pipeline A gets triggered and no other pipelines get triggered in the process?
Azure DevOps pipeline auto-triggers in multiple pipelines unintentionally
The issue was solved by using pm2 startOrReload current/ecosystem.config.js --only Development rather than pm2 startOrReload current/ecosystem.config.js --only Production.
I have a Nuxt SSR app which I want to deploy on a server. My CI (Buddy) runs a pipeline to do it by running bash commands. All of them run without any error, but at the end the application cannot find files in the .nuxt directory. It throws a 404 Not Found error for _nuxt/46f6559.modern.js. Everything looks OK except that the file really does not exist on the server. I tried to find it with the command sudo find . -type f -name 46f6559.modern.js. It seems like the main file links to old files which no longer exist in the repository, but I really don't know what is going on there. The build looks fresh and the .nuxt folder is filled with build files. These are the commands in the pipeline, which work fine:

yarn build

Then:

if [ -d "builds/$BUDDY_EXECUTION_REVISION" ] && [ "$BUDDY_EXECUTION_REFRESH" = "true" ]; then
  echo "Removing: builds/$BUDDY_EXECUTION_REVISION"
  rm -rf builds/$BUDDY_EXECUTION_REVISION;
fi
if [ ! -d "builds/$BUDDY_EXECUTION_REVISION" ]; then
  echo "Creating: builds/$BUDDY_EXECUTION_REVISION"
  cp -dR deploy-cache builds/$BUDDY_EXECUTION_REVISION;
fi

Then:

echo "Linking current to revision: $BUDDY_EXECUTION_REVISION"
rm -f current
ln -s builds/$BUDDY_EXECUTION_REVISION current

Then:

pm2 startOrReload current/ecosystem.config.js --only Production

Everything ends with success. Does anybody know what could have happened there? Maybe the last of the commands? If I run it on my local machine everything works well. Thanks for any help.
Nuxt SSR application can not find some files generated by pipeline
Not Possible with Classic Release Pipeline.
I have a YAML pipeline in Azure DevOps that contains a list of variables, and I can choose which one of them to use at runtime. This is my code: And this is how it looks at runtime: My question is how to create this option in a classic Azure pipeline. From past threads I read, I saw that there's no option to do that, but maybe something has changed lately.
Variables Dropdown menu in classic pipeline Azure DevOps?
In my question the regex was wrong. To resolve my problem I simply split the evaluation into two steps. First, I check if the source is a merge request event and the branch is develop or main; in that case the pipeline will not be executed. Otherwise, the second rule evaluates the merge request event for branches other than main or develop:

rules:
  - if: $CI_PIPELINE_SOURCE == 'merge_request_event' && $CI_COMMIT_BRANCH =~ /^develop|^main/
    when: never
  - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
    changes:
      - 'yarn.lock'
I have a GitLab pipeline and I'm trying to set a rule for a merge request event: I want to fire the rule when there is a merge request and the source branch is different from main and develop. I do that in my job:

rules:
  - if: $CI_PIPELINE_SOURCE == 'merge_request_event' && $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ /^develop|^main/

If I just do this, it works:

- if: $CI_PIPELINE_SOURCE == 'merge_request_event'

The variable $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME is available. But it seems that the regex doesn't work, or the && doesn't.
GitLab CI rule for merge request event does not work as expected
You can go with this:

param_name = "age"
value = 33
d = {param_name: value}
foo(**d)

We can create a dictionary with the parameter name as a string and its value; calling foo(**d) is called unpacking, so this transforms into param_name=value when calling the function. A complete sketch is shown below.
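A minimal runnable sketch of the idea; the function foo and its age parameter are hypothetical stand-ins for whatever the pipeline actually calls:

def foo(age):
    # the keyword argument arrives exactly as if foo(age=33) had been written
    print(f"age = {age}")

param_name = "age"
value = 33
kwargs = {param_name: value}
foo(**kwargs)   # equivalent to foo(age=33)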
Thank you for your time reading this question. I want to use an existing function through a pipeline, and I want to give the name of the parameter to that pipeline as well as its value. How can I pass the name of the parameter into that function? Like:

parm_name = "age"
value = 33
foo(parm_name=value)

instead we do:

params = {'age': [11, 23, 33]}
pipeline(foo, params)

So I have to define my pipeline function, but I don't know how to pass parameter names stored in a string. Thank you dear. Happy coding.
Pass parameter name to a function in python
Using a single pipeline, you cannot ensure task C would runAfter A and B unless both A and B were executed; see the docs, which somewhat hint that a "when" condition evaluating to false blocks execution of tasks that runAfter that conditional task. What you could try is to add a "finally" block that either triggers another pipeline or includes your task C; a sketch follows.
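A hedged sketch of what such a finally section might look like; the taskRef and param names are taken from the question, task names are lowercased to satisfy Kubernetes naming rules, and the surrounding Pipeline metadata is an assumption:

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: example-pipeline
spec:
  params:
    - name: type
      type: string
  tasks:
    - name: a                       # question's task A
      taskRef:
        name: buildah-secondary-tag-task
      runAfter:
        - maven-prepare-package
      when:
        - input: "$(params.type)"
          operator: in
          values: ["app"]
    # task b is defined the same way with its own "when" clause
  finally:
    # finally tasks run once all spec.tasks have finished or been skipped,
    # so c executes whether a or b was the branch that actually ran
    - name: c                       # question's task C
      taskRef:
        name: buildah-secondary-tag-task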
I have a pipeline with the following tasks:

- name: A
  taskRef:
    name: buildah-secondary-tag-task
  runAfter:
    - maven-prepare-package
  when:
    - input: "$(params.type)"
      operator: in
      values: ["app"]
- name: B
  taskRef:
    name: buildah-secondary-tag-task
  runAfter:
    - maven-prepare-package
  when:
    - input: "$(params.courtType)"
      operator: in
      values: ["bapp"]
- name: C
  taskRef:
    name: buildah-secondary-tag-task
  runAfter:
    - A

Tasks A and B are different; depending on params.type, A or B is executed, but task C has to run after A or B has been executed. How can I specify that condition in runAfter for task C?
Tekton pipeline conditional run
This depends on the model. Mostly, no. Also, you'd need to specify what exactly you mean by "score"; there are many metrics that might be saved somewhere. kNN models store the training data in a private attribute _fit_X (source), so you could recreate a score from that (though you're not really saving much work here). HistGradientBoosting models store an iteration-wise training and validation score (docs). GradientBoosting models similarly save loss values at each iteration. Cross-validation models like LogisticRegressionCV save the cross-validation scores for each hyperparameter value; those are significantly different from a training score, though. A few of these attributes are illustrated in the sketch below.
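A hedged sketch of where some of those stored values live, on tiny made-up data; the attribute names are the documented public ones in recent scikit-learn versions, and early stopping is switched on explicitly so the HistGradientBoosting scores are actually recorded:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, HistGradientBoostingClassifier
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = (X[:, 0] > 0.5).astype(int)

gb = GradientBoostingClassifier().fit(X, y)
print(gb.train_score_[-3:])                 # in-sample loss at the last boosting iterations

hgb = HistGradientBoostingClassifier(early_stopping=True, validation_fraction=0.2).fit(X, y)
print(hgb.train_score_[-1], hgb.validation_score_[-1])   # per-iteration scores kept by the model

lr = LogisticRegressionCV(cv=3).fit(X, y)
print(lr.scores_[1].mean(axis=0))           # mean cross-validation score per value of C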
I was wondering if a saved model in a Pipeline object contains the score of the data with which it has been trained. If so, how to get that score without having to put the data back in?
Does a Pipeline object store the score of the data it trained with?
Defining a new class which inherits from the desired transformer with a modified fit method should do the trick, e.g.

class StandardScaleWULD(StandardScaler):
    def __init__(self):
        super().__init__()
        self.unlabelled_data = UNLABELLED_TRAITS

    def fit(self, X, y=None, sample_weight=None):
        all_data = pd.concat([X, self.unlabelled_data])
        return super().fit(all_data, y, sample_weight)

This new transformer can then be used in the pipeline as usual.
I'm setting up a machine learning pipeline to classify some data. I have lots of unlabelled data (i.e. the target variable is unknown) that I would like to make use of. One of the ways I would like to do this is to use the unlabelled data to fit the transformers in my pipeline. For example, for the variables I am scaling, when StandardScaler is called I want it to fit on the given training data plus the unlabelled data and then transform the training data. For clarity, outside of a pipeline I can implement it like this:

all_data = pd.concat([labelled_data, unlabelled_data])
s_scaler = StandardScaler()
s_scaler.fit(all_data)
scaled_labelled_df = s_scaler.transform(labelled_data)

Is there a way of implementing this in the sklearn pipeline? I've had a look at FunctionTransformer but don't understand how I could use it in this case.
Including unlabelled data in sklearn pipeline
The issue mentioned above has been resolved. The part where the "DEMO XG-BOOST" run doesn't end was resolved by selecting the "Use emissary executor" option while creating the pipeline (see the snapshot). When we launch the pipeline with this setting, the issue is solved and we can run complete pipelines now. More details: we took support from GCP, and they mentioned that the issue might have been caused by a recent upgrade of the GKE cluster which removes the Docker runtime (https://www.kubeflow.org/docs/components/pipelines/installation/choose-executor/#docker-executor). Namely, the Docker executor is the default workflow executor and depends on the Docker container runtime, which is deprecated on Kubernetes 1.20+. We were using a GKE cluster whose version was 1.21.6, hence the issue. So we used the documentation (https://www.kubeflow.org/docs/components/pipelines/installation/choose-executor/#migrate-to-emissary-executor) and migrated to the emissary executor (instead of Docker), which solved our issue.
After submitting the "run" using a Jupyter notebook, when I go to the Kubeflow Pipelines dashboard I can see my "run" submitted and running, but it doesn't end even after a few hours and doesn't show any logs either. I have tried running the "DEMO XG-BOOST" but the same issue persists and the "run" doesn't end. Can someone please help me understand if there is any issue with the account settings or any other issue which I'm missing, because I have checked the documentation and other websites but couldn't understand why this occurs. Please note that this "run" was working fine (creating the pipeline flow) on Coursera using Qwiklabs (Covertype classifier). Error snapshot. Versions used: kfp 1.8.12; kfp-pipeline-spec 0.1.14; kfp-server-api 1.8.1; Python 3.7.12. If there is any other information which I have missed above, please let me know so I can share it to help solve this issue.
Kubeflow Pipelines error on GCP - Run doesn't end
You will have to add the parameters in both the parent and the child pipeline. In the child pipeline you will have to pass the parent pipeline's parameter value. For example, say your parameter name in the parent is ParaParent1 and the same parameter in the child is ParaChild1. The value of ParaParent1 should be the actual value, and the value of the child parameter ParaChild1 would be @pipeline().parameters.ParaParent1. Please refer to a similar article here: https://learn.microsoft.com/en-us/answers/questions/175485/passing-parameters-in-the-execute-pipeline-activit.html
I have created an ADF pipeline which calls a child pipeline (via an Execute Pipeline activity). How can I pass the values of the variables (start_date and end_date) to the child pipeline (which calls a Databricks notebook)? I am unable to get the variable values inside the child pipeline. Please find the pipeline image below, and the child pipeline inside the Execute Pipeline activity below that.
How to pass a variable to an ADF Execute Pipeline activity
Your YAML triggers seem fine. You don't need to add the exclude part either, since they are different paths. Could you please check which YAML file is assigned to each pipeline in Azure DevOps?
I have a multi-project repo with 2 projects. I have a pipeline in each one, with the exclude path set to the other project. Any time I push a change on any file, both pipelines are triggered.

.
├── README.md
├── project-1
│   ├── azure-pipelines-apis.yml
│   └── …
├── project-2
│   ├── azure-pipelines-ui.yml
│   └── …
├── project-3
└── project-4

And here are the pipelines.

Project-1:

trigger:
  branches:
    include:
      - 'dev'
  paths:
    include:
      - 'project-1/*'
    exclude:
      - 'project-2/*'
      - 'project-3/*'
      - 'project-4/*'
      - 'README.md'

Project-2:

trigger:
  branches:
    include:
      - 'dev'
  paths:
    include:
      - 'project-2/*'
    exclude:
      - 'project-1/*'
      - 'project-3/*'
      - 'project-4/*'
      - 'README.md'

Any time I push a change to a file inside project-1, both pipelines run. I tried to define the include and exclude paths in the Triggers menu in Azure DevOps Pipelines, but it didn't work. Could you give me any clue to find what's going on? I've followed the official documentation but I can't find what is happening. Thanks!
Azure pipeline is being triggered with modification in excluded folders
Tokens can now be used in your scripts for authentication.
I have created a private AKS cluster on Azure with a Linux VM that acts as a self-hosted agent/bastion. The Linux box can access the AKS cluster via kubectl. My issue is that when I try to run a bash script in the Azure DevOps pipeline, I get permission denied. (The pipeline is using the self-hosted agent above.) The bash script runs a Helm command that gets a list of the images that AKS will need. I want to use this list to pull images from an external Docker registry, push them to an internal Azure Container Registry, and then use Helm to deploy. Here is the Helm command in the script:

helm upgrade --install hosted-node -f helm_config.yaml myapp/hosted-app --dry-run -n dev | grep "image:" | awk "{print $2}" | uniq | sed "s/"//g" | grep "myapp" | sed "s/^.*image: //g"

The pipeline flow is like this:

1. Get the list of images needed
2. Pull the images from a private Docker repo
3. Push the images to ACR
4. Run Helm (it will be configured to use the images in ACR)

How do I give the build agent the proper credentials from the pipeline to run the above command? Thanks, Ray
How do I pass credentials to run a script on a Azure hosted agent for a private AKS cluster
Just a final update on this. I ended up adding the CDK stage in the develop build and using the CDK pipeline as a job, which then triggers the upstream build.
I currently have 2 pipelines in Jenkins. One is a full development build, from Declarative: Checkout SCM through to Declarative: Post Actions with everything in between: build the war, deploy to the server, test, etc. I also have an AWS CDK pipeline that builds the backend/frontend and deploys. Is there a way of adding just the deployment stage from the AWS pipeline to the development pipeline? So I would basically have the development pipeline building, with the dev deployment to the server followed by the CDK deployment, as part of the same pipeline. Just as a follow-up: I don't mean adding the extra stages/steps from the CDK build, but having the develop build trigger the CDK build once the develop build has succeeded.
Jenkins & AWS - Adding an existing stage from one build to another/ Call another build
Pipeline steps are applied sequentially, so your second transformer is receiving the email lengths rather than the email addresses. You can use a ColumnTransformer or FeatureUnion here. For example:

preproc = FeatureUnion([
    ('email_length', email_length1),
    ('domain_length', domain_length1),
    ('number_of_vouls', number_of_vouls1),
])

pipe = Pipeline([
    ('preproc', preproc),
    ('classifier', LGBMClassifier())
])

You'll get a new error because of the shape of the returns in your functions, but wrapping those up as numpy arrays and reshaping them appears to work:

def email_length(email) -> np.array:
    return np.array([len(e.split('@')[0]) for e in email]).reshape(-1, 1)

A full end-to-end sketch is given below.
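Putting the pieces together, a minimal runnable sketch. The email addresses are made up (the question's were redacted), and LGBMClassifier is swapped for scikit-learn's LogisticRegression so the example has no extra dependency; everything else follows the question's code:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.preprocessing import FunctionTransformer

X = np.array(['jsmith@example.com', 'a.long.name@mail.example.org'])
y = np.array([True, False])

# each feature function returns an (n_samples, 1) array so FeatureUnion can stack columns
def email_length(email):
    return np.array([len(e.split('@')[0]) for e in email]).reshape(-1, 1)

def domain_length(email):
    return np.array([len(e.split('@')[-1]) for e in email]).reshape(-1, 1)

def number_of_vouls(email):
    vouls = 'aeiouAEIOU'
    names = [e.split('@')[0] for e in email]
    return np.array([sum(1 for ch in n if ch in vouls) for n in names]).reshape(-1, 1)

preproc = FeatureUnion([
    ('email_length', FunctionTransformer(email_length)),
    ('domain_length', FunctionTransformer(domain_length)),
    ('number_of_vouls', FunctionTransformer(number_of_vouls)),
])

pipe = Pipeline([
    ('preproc', preproc),              # runs the three transformers side by side
    ('classifier', LogisticRegression()),
])

pipe.fit(X, y)
print(pipe.predict(X))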
I have only one input, which is the email of a user, and I create many different functions to build features from the email using FunctionTransformers from sklearn, for example:

X = np.array(['[email protected]', '[email protected]'])
y = np.array([True, False])

def email_length(email) -> np.array:
    return [len(e.split('@')[0]) for e in email]

def domain_length(email) -> np.array:
    return [len(e.split('@')[-1]) for e in email]

def number_of_vouls(email) -> np.array:
    vouls = 'aeiouAEIOU'
    name = [e.split('@')[0] for e in email]
    return [sum(1 for char in name if char in vouls) for name in name]

After creating the functions I wrap them in FunctionTransformers:

email_length1 = FunctionTransformer(email_length)
domain_length1 = FunctionTransformer(domain_length)
number_of_vouls1 = FunctionTransformer(number_of_vouls)

Then I create the Pipeline:

pipe = Pipeline([
    ('email_length', email_length1),
    ('domain_length', domain_length1),
    ('number_of_vouls', number_of_vouls1),
    ('classifier', LGBMClassifier())
])

But when I try to fit the model with pipe.fit(X, y), I get

AttributeError: 'int' object has no attribute 'split'

yet whenever I run domain_length(X) I get the output [9, 9].
many FunctionTransformer to the same column - sklearn
On Windows, try opening System Properties => Environment Variables. In the Environment Variables window there should be a variable called Path. Click it and then click Edit. Then click New on the right-hand side and add the path to the program, e.g. "C:/User/bin/php.exe".
I'm on macOS following instructions to set up an application (pipelines), and ran the following commands:

curl -o pipelines https://cloud.acquia.com/pipeline-client/download
chmod a+x pipelines

To finalise the instructions, quoting the guide, I need to move the pipelines program to a directory in my PATH. How do I move the pipelines program to a directory in my PATH? Not sure if it helps, but I use zshrc and iTerm.
How to move a program to a directory in my PATH
You cannot isolate different steps within a single Dataflow pipeline without implementing custom logic (for example, custom DoFn/ParDo implementations). Some I/O connectors such as BigQuery offer a way to send failed requests to a dead-letter queue in some write modes, but this might not give what you want. If you want full isolation you should run separate jobs and combine them into a workflow using an orchestration framework such as Apache Airflow.
I am trying to load data in Avro format from GCS to BigQuery using a single pipeline. There are, for instance, 10 tables that I am trying to load, which means 10 parallel jobs in a single pipeline. Now if the 3rd job fails, all the subsequent jobs fail. How can I make the other jobs run independently of the failure/success of any one of them?
Running jobs independent of each other's failure/success in a single dataflow pipeline
This might actually be more of a pure mlflow limitation and might not have anything to do with Kedro. From the docs, it looks like mlflow only allows us to compare a single point (assuming x and y are logged as metrics) for each experiment.
I am new to Kedro, and I don't know if I am asking the right question here. Is it possible on the kedro-mlflow UI to plot x and y lists? I am running a Kedro pipeline with mlflow, and I have a catalog.yaml in which I log metrics and artifacts. The end goal is:

kedro run 1  # generates x1=[1,2,3,4] and y1=[1,2,2,2] (these numbers are just examples)
kedro run 2  # generates x2=[1,2,3,4] and y2=[3,1,2,1] (these numbers are just examples)
kedro run 3  # generates x3=[1,2,3,4] and y3=[1,3,3,3] (these numbers are just examples)

Then, in the kedro mlflow UI: select run1, run2, and run3, then click compare. On the scatter plot, be able to select x1, x2, and x3 for the x axis and y1, y2, and y3 for the y axis; then I should be able to see a plot with three lines. Something like this: Thank you for your help.
How to plot on kedro mlflow ui x1=array/list/dict and y1=array/list/dict?
Just off the top of my head: could you try assigning the command's output to a variable,

variable=$(curl -s -u 'username:password' https://artifactorypro.jfrog.com/artifactory/abc/com/1.0-SNAPSHOT/ | grep jar | cut -d '"' -f2 | tail -5)

and then simply send the variable as a parameter? (From the comment discussion: this command returns the last five snapshots, and the goal is to pass that output as parameters to a Jenkins pipeline job; one way to wire the shell output into job parameters is sketched below.)
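A hedged Groovy sketch of capturing the shell output inside a scripted Jenkins pipeline and exposing it as a choice parameter for the next run; the credentials and URL are the placeholders from the question, and sh(returnStdout: true) and properties/parameters are standard Pipeline steps:

// Scripted pipeline sketch: list the last 5 snapshot jars and offer them as a choice parameter
node {
    stage('Discover snapshots') {
        // returnStdout captures the command's output instead of only its exit code
        def jars = sh(
            script: "curl -s -u 'username:password' https://artifactorypro.jfrog.com/artifactory/abc/com/1.0-SNAPSHOT/ | grep jar | cut -d '\"' -f2 | tail -5",
            returnStdout: true
        ).trim().split('\n')

        // Redefine the job's parameters so the next "Build with Parameters" run shows the list
        properties([
            parameters([
                choice(name: 'SNAPSHOT_JAR', choices: jars.join('\n'), description: 'Snapshot jar to use')
            ])
        ])
    }
}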
I'm looking for help to get the list of jars in an Artifactory repository using a shell command (something like below):

curl -s -u 'username:password' https://artifactorypro.jfrog.com/artifactory/abc/com/1.0-SNAPSHOT/ | grep jar | cut -d '"' -f2 | tail -5

How do I return the output as parameters to a Jenkins job? If that is not a good idea, help me with options to pass the snapshot IDs using the JFrog API or a Groovy script to list the snapshots. I see multiple plugins supporting Groovy but not shell.
Dynamically get parameters from script for jenkins job
Issue resolved by updating to the latest version; I used azurerm provider version 2.0.
When I try to update to the latest Terraform version, terraform plan tries to replace the storage account resource. How can I prevent that? I tried a lifecycle block.
Terraform plan trying to replace existing storage account resource
The only thing I can notice: wouldn't you need to iterate over the content of pbifiles? Like so: for pbifile in $pbifiles. A sketch of the corrected job is shown below.
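A hedged sketch of the job with that fix applied; it also switches the folded scalar (>) to a literal block (|) so each shell line stays on its own line, which avoids the "syntax error: unexpected do" from the question (the script contents are otherwise taken verbatim from the question):

deploy-test:
  stage: test
  only:
    - /^B[Ii]-.*$/
  script:
    - |
      pbifiles=$(git diff --name-only HEAD HEAD~1 -- '***.pbix')
      for pbifile in $pbifiles; do
        echo "Publishing $pbifile to BI-Test...."
        python UploadPBITest.py --files $pbifile
      done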
I have a job in my pipeline:

deploy-test:
  stage: test
  only:
    - /^B[Ii]-.*$/
  script:
    - >
      pbifiles=$(git diff --name-only HEAD HEAD~1 -- '***.pbix')
      for pbifile in pbifiles
      do
        echo "Publishin $pbifile to BI-Test...."
        python UploadPBITest.py --files $pbifile
      done

but every time I get this error:

$ pbifiles=$(git diff --name-only HEAD HEAD~1 -- '***.pbix') for pbifile in pbifiles; do echo "Publishin $pbifile to BI-Test...." python UploadPBITest.py --files $pbifile done
/bin/sh: eval: line 142: syntax error: unexpected "do"
Cleaning up project directory and file based variables 00:00
ERROR: Job failed: exit code 2
gitlab for loop failed
version_check:main:
  stage: main
  only:
    - merge_request
  script:
    - echo ${CI_MERGE_REQUEST_SOURCE_BRANCH_NAME}
    - echo ${CI_MERGE_REQUEST_TARGET_BRANCH_NAME}

The output is:

$ echo ${CI_MERGE_REQUEST_SOURCE_BRANCH_NAME}
ALXX3-2663
$ echo ${CI_MERGE_REQUEST_TARGET_BRANCH_NAME}
master
I have a stage in my CI/CD pipeline:

version_check:main:
  stage: main
  script:
    - echo CI_MERGE_REQUEST_SOURCE_BRANCH_NAME=$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME
    - echo CI_MERGE_REQUEST_TARGET_BRANCH_NAME=$CI_MERGE_REQUEST_TARGET_BRANCH_NAME

and the output in the CI log is:

$ echo CI_MERGE_REQUEST_SOURCE_BRANCH_NAME=$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME
CI_MERGE_REQUEST_SOURCE_BRANCH_NAME=
$ echo CI_MERGE_REQUEST_TARGET_BRANCH_NAME=$CI_MERGE_REQUEST_TARGET_BRANCH_NAME
CI_MERGE_REQUEST_TARGET_BRANCH_NAME=

How can I get the source and target branch names in the proper way?
Find a source and target branch name
Make sure you copy the artifacts in the job that creates them. Each job (and thus every stage) runs on an agent, and the job's directories are cleaned at the start of each job. So both your build jobs must publish their own artifact and use a unique name; then you don't need the last stage. If you want a single artifact with both files, you need to either build both artifacts in a single job, or have the publish job download the two artifacts and then create a new, single artifact with both the iOS and Android packages. A sketch of per-job publishing is shown below.
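A hedged sketch of what per-job publishing could look like, based on the question's pipeline; only the extra steps are shown, and the PathtoPublish values and artifact names are assumptions about where the Flutter outputs land:

- stage: Build
  jobs:
    - job: iOSBuild
      pool:
        vmImage: 'macOS-latest'
      steps:
        # ... FlutterInstall@0 and FlutterBuild@0 as in the question ...
        - task: PublishBuildArtifacts@1        # publish in the same job that built the output
          inputs:
            PathtoPublish: '$(Build.SourcesDirectory)/build/ios'
            ArtifactName: 'drop-ios'
    - job: AndroidBuild
      pool:
        vmImage: 'macOS-latest'
      steps:
        # ... FlutterInstall@0 and FlutterBuild@0 as in the question ...
        - task: PublishBuildArtifacts@1
          inputs:
            PathtoPublish: '$(Build.SourcesDirectory)/build/app/outputs'
            ArtifactName: 'drop-android'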
I've been trying to publish a Flutter artifact for a while, but keep getting the same error. A lot of the Stack Overflow solutions seem to work for .NET, but not in my case. I'm using the Flutter Tasks extension to build. Thanks for the help. Here's the solution if someone needs it later: https://gist.github.com/OriginalMHV/bca27623c32dc04a311f6dff837e2d42

stages:
  - stage: Build
    jobs:
      - job: iOSBuild
        pool:
          vmImage: 'macOS-latest'
        steps:
          - task: FlutterInstall@0
            inputs:
              channel: 'stable'
              version: 'latest'
          - task: FlutterBuild@0
            inputs:
              target: ios
              projectDirectory: $(projectDirectory)
              iosCodesign: false
              iosTargetPlatform: device
      - job: AndroidBuild
        pool:
          vmImage: 'macOS-latest'
        steps:
          - task: FlutterInstall@0
            inputs:
              channel: 'stable'
              version: 'latest'
          - task: FlutterBuild@0
            inputs:
              target: apk
              projectDirectory: $(projectDirectory)
  - stage: CopyAndPublishArtifact
    jobs:
      - job: CopyArtifactFiles
        steps:
          - task: CopyFiles@2
            inputs:
              SourceFolder: $(Build.SourcesDirectory)
              TargetFolder: $(Build.ArtifactStagingDirectory)
      - job: PublishArtifact
        steps:
          - task: PublishBuildArtifacts@1
            inputs:
              PathtoPublish: $(Build.ArtifactStagingDirectory)
              ArtifactName: drop
Azure DevOps Pipeline - Flutter - Directory '/home/vsts/work/1/a' is empty. Nothing will be added to build artifact 'drop'
@Math12, I encountered this same issue recently, and the way I got around it is to wrap the RandomUnderSampler() in a custom function which is then further transformed by a FunctionTransformer. I reworked your code this way and it worked. Below is a snippet of the code sample:

from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline

sel = SelectKBest(k='all', score_func=chi2)
preprocessor = ColumnTransformer(transformers=[('num', numeric_transformer, numeric_cols)])

def Data_Preprocessing_3(df):
    # fit random under sampler on the train data
    rus = RandomUnderSampler(sampling_strategy=0.2)
    df = rus.fit_resample(df)
    return df

# in a separate code line outside the above function, wrap the function
# with a FunctionTransformer
under = FunctionTransformer(Data_Preprocessing_3)

# implement your pipeline as done initially
final_pipe = Pipeline(steps=[('sample', under), ('preprocessor', preprocessor), ('var', VarianceThreshold()), ('sel', sel), ('clf', model)])
I have the following pipeline construction:

from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline

sel = SelectKBest(k='all', score_func=chi2)
under = RandomUnderSampler(sampling_strategy=0.2)
preprocessor = ColumnTransformer(transformers=[('num', numeric_transformer, numeric_cols)])
final_pipe = Pipeline(steps=[('sample', under), ('preprocessor', preprocessor), ('var', VarianceThreshold()), ('sel', sel), ('clf', model)])

However, I get the following error:

TypeError: All intermediate steps of the chain should be estimators that implement fit and transform or fit_resample (but not both) or be a string 'passthrough' '<class 'sklearn.compose._column_transformer.make_column_selector'>' (type <class 'type'>) doesn't)

I don't understand what I am doing wrong. Can anybody help?
How to use imblearn undersampler in pipeline?
I did the same thing as you in my pipeline and it's working as expected; nothing is being repeated. I'm using version 1.3 of the Parameter Separator plugin.
I am using the declarative "Jenkinsfile" pipeline syntax (multibranch pipeline) and want to separate my parameters into groups. I found articles showing that I can use the separator plugin with something like:

String sectionHeaderStyleCss = ' color: white; background: green; font-family: Roboto, sans-serif !important; padding: 5px; text-align: center; '
String separatorStyleCss = ' border: 0; border-bottom: 1px dashed #ccc; background: #999; '

pipeline {
    parameters {
        separator(
            name: "Group_1",
            sectionHeader: "Foo Params",
            separatorStyle: separatorStyleCss,
            sectionHeaderStyle: sectionHeaderStyleCss
        )
        string(
            name: 'FooStuff',
            defaultValue: 'Foo',
            description: 'Foo Stuff',
        )
        separator(
            name: "Group_2",
            sectionHeader: "Bar Params",
            separatorStyle: separatorStyleCss,
            sectionHeaderStyle: sectionHeaderStyleCss
        )
        string(
            name: 'BarStuff',
            defaultValue: 'Bar',
            description: 'Bar Stuff'
        )
    }
}

When I open 'Build with Parameters' in Jenkins the first time, it's fine; I see the layout I expect:

+----- Foo Params -----+
FooStuff: Foo
+----- Bar Params -----+
BarStuff: Bar

But if I open 'Build with Parameters' again, it seems like the separators multiply like Mickey Mouse brooms, and now I have:

+----- Foo Params -----+
FooStuff: Foo
+----- Bar Params -----+
BarStuff: Bar
+----- Foo Params -----+
+----- Bar Params -----+

Does anyone know why my parameters are multiplying each time I run?
Jenkins: separator parameter with declarative syntax pipeline (multibranch pipeline) job
Okay, so thanks to @sytech's tip: I needed to enter an HTTP URL, because an SSH URL won't work since I'm using password authentication to access my repository.
As suggested in the very first step when creating a project pipeline on Buddy.works, I've input all the required data using "Private Git server". Yet I get the error "Failed to import" with no further details. I couldn't find any answers searching the net. Can someone help?
Buddy.works CI: Failed to import project from private GitLab
You need to install multiple instances of the agent on your build server. One agent only runs one job at a time, but you can install as many copies of the agent on the same server as you want; just extract the agent to a new folder and register it (a sketch of registering a second instance is shown below). From the follow-up discussion in the comments: if jobs and stages need each other's build output, they should pass it via artifacts rather than rely on a shared working directory. If you always run the jobs on the same server you could fiddle with agent demands (learn.microsoft.com/en-us/azure/devops/pipelines/process/…), but each agent has its own working directory and each job executes in a subfolder underneath it, so you won't end up in the same work folder with the build output in it, even though that uses more disk space.
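A hedged bash sketch of registering a second self-hosted agent instance on the same Linux server; the organization URL, PAT variable, pool and agent names are placeholders, and config.sh with these flags is the standard Azure Pipelines agent setup:

# extract a second copy of the agent package into its own folder
mkdir ~/agent-2 && cd ~/agent-2
tar zxvf ~/Downloads/vsts-agent-linux-x64-3.220.0.tar.gz   # version number is only an example

# register it against the same pool under a different agent name
./config.sh --unattended \
  --url https://dev.azure.com/your-org \
  --auth pat --token "$AZP_TOKEN" \
  --pool Default \
  --agent "$(hostname)-2"

# run it interactively, or install it as a service
./run.sh
# ./svc.sh install && ./svc.sh start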
I encountered an issue which I have been fighting for a few days already, without success. I have a multistage pipeline written for Azure DevOps and a self-hosted agent. Is it possible to run multiple concurrent runs, for different branches, in different workspaces? I mean, I have queued runs for dev, dev2, master, etc., and I want to run three concurrent runs in separate workspaces for them.
Running multiple jobs for pipeline on Azure DevOps Self-Hosted Agent
Move this as a task inside your test job, or whatever is running the tests. Then cd into the location where your Allure bat file is, and from there just run allure generate result-directory -o report-directory --clean. Your publish task needs to happen right after; you can take the generated results from there into the desired location of the publish.
We are using the Allure Test Report plugin for this purpose. However, this fails with the error below. Can somebody help us find a way to overcome this issue, or any other approach to publish the Allure-generated artifact in an ADO pipeline?
can't publish allure report in ADO pipeline
You may just use the video-sink property of playbin:

gst-launch-1.0 playbin uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm video-sink=xvimagesink

gst-launch-1.0 playbin uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm video-sink=nv3dsink
Assuming that there are two kinds of streams, one being only a video stream (without audio) and the other being video with audio: we know that playbin with a URI can play both even if we don't know what kind of stream we get. But is there any pipeline that uses xvimagesink or nv3dsink (not autovideosink etc.) that can receive both (with or without audio), given that we don't know whether the video stream has audio or not? For instance, if the video stream has audio, we play video with audio; otherwise we play video only. I've tried:

gst-launch-1.0 rtmpsrc location="$RTMP_SRC" ! \
  flvdemux name=demux \
  demux.audio ! queue ! decodebin ! autoaudiosink \
  demux.video ! queue ! decodebin ! autovideosink

but if the video has no audio, only one frame is shown on the screen. This article helps me a lot, but I'm still looking for a good general pipeline that works like playbin but uses xvimagesink or nv3dsink for playing video: https://github.com/matthew1000/gstreamer-cheat-sheet/blob/master/rtmp.md
A general gstreamer pipeline use xvimagesink to pull rtmp stream with/without audio
I was just updating my answer to a similar question here; maybe you can use that too. You can leverage a PowerShell cmdlet to invoke and rerun the pipeline (Invoke-AzDataFactoryV2Pipeline (Az.DataFactory) | Microsoft Docs). Further, check the usage of the flags --is-recovery and --start-from-failure to rerun from a failed point without triggers. A minimal invocation sketch follows.
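A hedged PowerShell sketch of triggering a pipeline run on demand with the Az.DataFactory module; the resource group, factory, and pipeline names are placeholders, and only the basic documented parameters are used:

# Requires the Az.DataFactory module and an authenticated session (Connect-AzAccount)
$runId = Invoke-AzDataFactoryV2Pipeline `
    -ResourceGroupName "my-rg" `
    -DataFactoryName  "my-adf" `
    -PipelineName     "my-pipeline"

# The cmdlet returns the run id, which can then be polled for status
Get-AzDataFactoryV2PipelineRun -ResourceGroupName "my-rg" -DataFactoryName "my-adf" -PipelineRunId $runId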
I am a Software Engineering Associate and I am pretty new to using Microsoft Azure. The client wants me to create a re-run function for the whole pipeline without using triggers. What should I do?
How to rerun an Azure Data Factory without using triggers?
<env name="SYMFONY_DEPRECATIONS_HELPER" value="disabled" />

Putting this line inside my phpunit.xml file disabled the deprecation warnings and the job was successful. A sketch of where the line goes is shown below.
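For context, a hedged sketch of where that line typically sits in phpunit.xml, inside the <php> element; the rest of the file (bootstrap path, test suite name) is an assumption:

<?xml version="1.0" encoding="UTF-8"?>
<phpunit xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" bootstrap="tests/bootstrap.php">
    <php>
        <!-- silence Symfony deprecation notices so they do not fail the CI job -->
        <env name="SYMFONY_DEPRECATIONS_HELPER" value="disabled" />
    </php>
    <testsuites>
        <testsuite name="Project Test Suite">
            <directory>tests</directory>
        </testsuite>
    </testsuites>
</phpunit>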
I have a pipeline that executes PHPUnit tests. There are no failures when running the tests, but there are a few tests that are risky or incomplete, and a few deprecation notices, which make my pipeline fail. Is there a way to make the job not fail because of these warnings and only fail on PHPUnit test failures?
Pipeline job fails because of risky phpUnit tests and deprecations
There's not enough detail to be sure, but this error often happens when you read bytes from a file or the network without decoding them into a string first. Where do you get your data from? Did you check that the inputs are decoded into str? The type builtin can be used to check. (On closer inspection, it looks like numbers are being fed into sklearn.feature_extraction.text.CountVectorizer, which actually expects text input; a preprocessing step like that has probably already been completed, so check what the Kaggle dataset documentation says x_train in fact represents.)
I'm trying to work with this Pipeline and fit a Naive Bayes model for NLP, but I keep getting similar errors. The code:

nb = Pipeline([('vect', CountVectorizer(lowercase=False)),
               ('tfidf', TfidfTransformer()),
               ('clf', MultinomialNB()),
              ])
nb.fit(x_train, y_train)

My x_train and y_train are arrays; an example of x_train:

array([[ 431,   79,   30, ...,    0,    0,    0],
       [  19,   69,  133, ...,    0,    0,    0],
       [ 360, 2338,   24, ...,    0,    0,    0],
       ...,
       [ 249, 2516,    8, ...,    0,    0,    0],
       [1154,   26,   38, ...,    0,    0,    0],
       [  27,   25,   70, ...,    0,    0,    0]])

I got this error, and I was looking at the documentation but couldn't find anything helpful:

TypeError: cannot use a string pattern on a bytes-like object
cannot use a string pattern on a bytes-like object, pipeline python
A coworker found the problem: the new project was set to build in a configuration used only by the pipeline, which prevented the problem from showing up locally for anyone. Removing the project from building in said configuration solved the issue.
I've been searching for any existing information on this problem, but so far I've turned up nothing. I moved a XAML file from an existing project within the C# solution to a new project I added, and after updating namespaces, the solution compiles locally for me and everyone else who's looked at my changes. However, when I tried to merge my work, the ADO pipeline succeeds at compilation but fails at UWP Appx packaging with the message "##[error]IsenCommonLib\Views\DeviceControl.xaml(97,14): Error MC3074: The tag 'GenericLoader' does not exist in XML namespace 'clr-namespace:HP.Omen.OmenControls.Views;assembly=OmenControls'. Line 97 Position 14.". The XAML control GenericLoader definitely does exist in this namespace, and it's used about a dozen other times throughout the codebase with no issues. The only changes made to the XAML file I moved were namespace updates, and so far neither I nor anyone else who's pulled my branch has been able to find anything wrong. I'm completely stumped at this point. Possibly relevant: I added the new project by copy-pasting an existing project and editing it, in order to get all the boilerplate configuration for free instead of having to manually reconfigure everything. I know that's a really good way to miss something and have stuff break, but I've checked everything I've found in threads relating to error MC3074 and I haven't found anything that's not how it should be.
XAML error MC3074 occurring only on pipeline
There is no way to get the variables passed to the pipeline you trigger. You can save them somewhere else, for example in a Redis hash set or a MySQL table, at the moment you trigger the pipeline.
I have a pipeline which takes some input variables like name and description. These variables are configured manually before every pipeline run. I need these specific inputs via the Bitbucket API. From the following documentation, https://developer.atlassian.com/cloud/bitbucket/rest/api-group-pipelines/, I don't see any method to get the variables used to build the pipeline. Getting the information for a single pipeline doesn't help; this is the API call for a specific pipeline: https://api.bitbucket.org/2.0/repositories/{workspace}/{repo_slug}/pipelines/{pipeline_uuid}. Does anybody know if there is a way to access these inputs using the Bitbucket API?
Is there a way to get the variables for a Bitbucket pipeline using Bitbucket API?
One proposal would be this one:

db.collection.aggregate([
  {
    $group: {
      _id: {
        agent_id: "$agent_id",
        campaign: "$campaign",
        date: {
          $dateTrunc: {
            date: "$created_at",
            unit: "week",
            timezone: "Europe/Zurich",
            startOfWeek: "monday"
          }
        }
      },
      "Total call": { $sum: "$total_in_call_time" },
      Outgoing: { $sum: "$outbound_call" },
      Incoming: { $sum: "$iinbound_call" },
      "Average Call": { $avg: "$total_in_call_time" },
      "Total Time": { $sum: "$total_call" },
      "Idle Time": { $sum: "$ideal_time" }
    }
  },
  {
    $set: {
      "Average Call": { $dateToString: { date: { $toDate: { $multiply: ["$Average Call", 1000] } }, format: "%H:%M:%S" } },
      "Total Time": { $dateToString: { date: { $toDate: { $multiply: ["$Total Time", 1000] } }, format: "%H:%M:%S" } },
      "Idle Time": { $dateToString: { date: { $toDate: { $multiply: ["$Idle Time", 1000] } }, format: "%H:%M:%S" } }
    }
  },
  { $replaceWith: { $mergeObjects: ["$_id", "$$ROOT"] } },
  { $unset: "_id" }
])

Note: $dateToString with format "%H:%M:%S" works for periods up to 24 hours. See the Mongo Playground.
This is the structure of my collection:

{
  "_id": { "$oid": "61a5f45e7556f5670e50bd25" },
  "agent_id": "05046630001",
  "c_id": null,
  "agentName": "Testing",
  "agent_intercom_id": "4554",
  "campaign": ["Campaig227"],
  "first_login_time": "28-12-2021 10:55:42 AM",
  "last_logout_time": "21-01-2022 2:20:10 PM",
  "parent_id": 4663,
  "total_call": 2,
  "outbound_call": 1,
  "iinbound_call": 1,
  "average_call_handling_time": 56,
  "logged_in_duration": 2,
  "total_in_call_time": 30,
  "total_break_duration": 10,
  "total_ring_time": 2,
  "available_time": 40,
  "ideal_time": 0,
  "occupancy": 0,
  "inbound_calls_missed": 0,
  "created_at": { "$date": "2021-11-29T18:30:00.000Z" }
}

I want to generate a monthly result like this:

Agent   | Campaign   | Total call | Outgoing | Incoming | Average Call | Total Time | Idle Time
Agent 1 | Campaig227 | 148        | 38       | 62       | 12:00:18     | 12:46:45   | 0:23:57
Agent 2 | Campaig227 | 120        | 58       | 62       | 16:00:18     | 16:46:45   | 0:23:57

and a daily report like:

Agent   | Date   | Campaign | Total call | Outgoing | Incoming | Average Call | Total Time | Idle Time
Agent 1 | 1/1/22 | Campaig2 | 14         | 10       | 4        | 4:00:18      | 4:46:45    | 0:46:26
Agent 1 | 2/1/22 | Campaig2 | 24         | 15       | 9        | 10:00:18     | 9:46:45    | 0:15:26
Agent 2 | 1/1/22 | Campaig1 | 16         | 10       | 6        | 4:00:18      | 4:46:45    | 0:46:26
Agent 2 | 2/1/22 | Campaig1 | 30         | 15       | 15       | 10:00:18     | 9:46:45    | 0:15:26

Please note that this is only sample data; the actual figures are different. I tried to do this using aggregate and a pipeline, but as I am new to MongoDB I am finding it difficult to write the query.
How can I generate report from collection on daily, weekly and monthly basis MongoDB?
After a lot of searching and testing, this is my result:

image: androidsdk/android-30

pipelines:
  default:
    - step:
        name: Android Debug Application
        deployment: Test
        caches:
          - gradle
        script:
          - echo 'Start Building'
          - apt-get install make -y
          # download ndk
          - wget "https://dl.google.com/android/repository/android-ndk-r14b-linux-x86_64.zip" -O temp.zip
          - unzip temp.zip -d ~/android_ndk
          - rm temp.zip
          - export DIR=~/android_ndk/android-ndk-r14b
          - echo "ndk.dir=$DIR" >> local.properties
          - cat local.properties
          - ./gradlew assembleDebug
          - echo 'Building Finished'
        artifacts:
          - app/build/outputs/**

I use an unsupported NDK, which is why I download it manually. In case you want a supported NDK you can simply type:

- sdkmanager <ndk;version>

It will get more updates, I hope!
I am very new to this and probably don't understand some things that I should, so bear with me! I am trying to create a build script for building an Android APK in a Bitbucket pipeline. I use ./gradlew assembleDebug but I am missing a lot of things there. In the project I have a custom library which needs the NDK in order to be built, but I don't know how to get that NDK in the pipeline. In Android Studio locally it builds fine, but I don't know how to do the same in the pipeline. Can someone please explain what I need to do?

image: androidsdk/android-30

pipelines:
  default:
    - step:
        name: Android Debug Application
        deployment: Test
        caches:
          - gradle
        script:
          - echo 'Start Building'
          - ./gradlew assembleDebug
          - echo 'Building Finished'
        artifacts:
          - app/build/outputs/**

The above is what I have now! Please help a friendly noob coder here. Many thanks in advance!
Build script for an Android APK in a Bitbucket pipeline
Oh my, I feel so silly. The GitLab documentation says that protected variables are only available on protected branches. The branch I was on was not protected; it was a random branch I had. The solution was to make the variables unprotected.
I haven't found anything useful online; I am also fairly new to the GitLab pipeline. Here is what I have:

cache:
  key: "$CI_BUILD_REF_NAME"
  paths:
    - .gradle/

image: xxx_irrelevant

before_script:
  - git submodule update --init --recursive
  - export GRADLE_USER_HOME=`pwd`/.gradle
  - export SSL_CERT_DIR="/usr/local/etc/openssl/certs"
  - make ui-tests-clean

stages:
  - build
  - deploy
  - scan

xxx_scan:
  stage: scan
  before_script:
    - echo "running xxx scan"
  image: some_image_not_relevant
  tags:
    - nite
  variables:
    APIKEY: $APIKEY
    USERKEY: $USERKEY
    WSS_URL: "xxx_some_url_not_relevant"
    PRODUCTNAME: $PRODUCT_TOKEN
    PROJECTNAME: $CI_PROJECT_NAME
  script: not_relevant_this_is_working
  only:
    - main

On GitLab I have those variables defined, but I know the job is not retrieving them, because when I hardcode the APIKEY the security scan works. With the configuration above, it says "Bad Org Token", which means it's not able to get the APIKEY. Is there something I am doing wrong?
Gitlab-ci.yml file in Android Studio not retrieving user Predefined variables saved in the CI/CD Variables settings?
This can be done through the branch policy "Automatically included reviewers" option.
I am currently using NuKeeper in my Azure DevOps pipeline to automatically update my packages. It works fine and automatically creates a pull request when the pipeline is run. However, the pull requests do not have any required/optional reviewers assigned. I would like to automatically assign optional reviewers with specific names to the PR. I have looked into the NuKeeper configuration at https://nukeeper.com/basics/configuration/ but could not find any options to achieve this. Below is my YAML content:

trigger: none

schedules:
  - cron: "0 3 * * 0"
    displayName: Weekly Sunday update
    branches:
      include:
        - master
    always: true

pool: CICDBuildPool-VS2019

steps:
  - task: NuKeeper@0
    displayName: NuKeeper Updates
    inputs:
      arguments: --change Minor --branchnameprefix "NewUpdates/" --consolidate

Does anyone know if it is feasible to automatically assign specific optional reviewers via the NuKeeper pipeline?
NuKeeper - how to add reviewers for pull request?
So, I found my mistake: I must specify some parameters for application/x-rtp.

Source:

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw ! jpegenc ! jpegparse ! rtpjpegpay ! udpsink host=10.116.0.110 port=5602

Receive:

gst-launch-1.0 udpsrc port=5602 buffer-size=90000 ! application/x-rtp, encoding-name=JPEG,payload=26,clock-rate=90000 ! rtpjpegdepay ! jpegparse ! queue ! jpegdec ! videoconvert ! xvimagesink
I am trying to make a JPEG stream using a webcam. These are my pipelines:

Source:

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw, width=640, height=480 ! jpegenc ! jpegparse ! rtpjpegpay ! udpsink host=127.0.0.1 port=5602

Receive:

gst-launch-1.0 udpsrc port=5602 buffer-size=90000 ! application/x-rtp ! rtpjpegdepay ! jpegparse ! queue ! jpegdec ! videoconvert ! xvimagesink

The source pipeline starts normally, but when I start the receive pipeline, I get this error:

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstCapsFilter:capsfilter0: Filter caps do not completely specify the output format
Additional debug info:
gstcapsfilter.c(453): gst_capsfilter_prepare_buf (): /GstPipeline:pipeline0/GstCapsFilter:capsfilter0:
Output caps are unfixed: application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)JPEG; application/x-rtp, media=(string)video, payload=(int)26, clock-rate=(int)90000
Execution ended after 0:00:00.001054576
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

I'm sure I created the receive pipeline incorrectly, but I can't figure out how to build it correctly. I would be very grateful for your help!
GStreamer: can't make a JPEG stream from a webcam
After doing in-depth research and consulting with my colleagues, I have found that the release history cannot be recovered after we change the name of the release pipeline. Thus, we have decided to move ahead with the lost history and be more careful in the future.
I was making some changes to the release pipelines, in which we divided one pipeline (Dev, PreProd, and Prod) into three separate release pipelines. After the change, we lost the release history in all pipelines. Now I am trying to "Revert pipeline" by going to the history. It appears Azure DevOps is confusing two different stages as one, so when I try to save it, the error below appears. Release pipelines do not support YAML editing, so I am unsure if this is a bug in Azure DevOps or if I am missing something.
VS402872: Duplicate release pipeline stages " " found. Specify a different name and try again
In order to download/copy the files from SharePoint you will have to pass the file names; you cannot use wildcard naming. Hence you will have to follow the series of steps below to achieve it:

Web1 – get the access token from SPO.
Web2 – get the list of files from the SPO folder.
ForEach1 – loop over the list of file names.
Copy1 – Copy Data with the HTTP connector as source.

For detailed steps, please refer to this article by an MSFT engineer: https://techcommunity.microsoft.com/t5/azure-data-factory-blog/sharepoint-online-multiple-files-folder-copy-with-http-connector/ba-p/2480426
I have an HTTP connection to a SharePoint site and I would like to import a file without having to specify the file name. Is it possible? Currently I use a dataset with the URL parametrized as @concat('dburl/_api/web/GetFileByServerRelativeUrl(', dataset().filename, ')/$value'), where filename is a parameter that I have defined. In my pipeline, the copy activity gets the token and the app secret, and I just specify the name in the filename parameter. I tried using the * symbol and everything else, but it fails.
ADF SharePoint API (HTTP) Connection Wildcard File
Probably the data is correct, but the column order has changed, so your re-framing has put the wrong column labels on. Try

DF_Z = pd.DataFrame(z, columns=p.get_feature_names_out())
I tried this code in order to impute missing values in my column (any strategy):

import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer

X = pd.read_csv('namefile.csv')
li = ['feature1']

# X['feature1'].value_counts() of feature1:
# 0.00    7269
# 1.00    1745
# nan      683
# 2.00     607
# 3.00     520
# 4.00     146
# 5.00      31
# 6.00       6

p = ColumnTransformer(remainder='passthrough', transformers=[('simp', SimpleImputer(), li)])
z = p.fit_transform(X)
DF_Z = pd.DataFrame(z, X.columns)

# Distribution checking
# DF_Z['feature1'].value_counts() of feature1:
# 4.00    7269
# 3.00    3137
# 5.00    2170
# 2.00     403
# 0.00     235
# 1.00      45

I don't understand why the transformer corrupts the imputed values, and I do not understand why values that were not missing were changed.
Column transformer corrupts column
If you create a branch on Jira (or any other Git tool, like GitHub, GitLab, Bitbucket, Azure, ...) you must first let your local repository know that there is a new branch, and you do this with git fetch. You must also run this command if another team member creates a branch and you want that branch locally on your computer. Alternatively, you can create a branch locally with git checkout -b <branch-name> on the command line, or in PyCharm as described here, and then push this new branch to the remote (Jira or whatever); this way one step can be saved. Both routes are sketched below.
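A command-line sketch of both routes, using the branch name from the question and assuming the remote is called origin:

# Route 1: the branch was already created on the server (e.g. via Jira/GitLab)
git fetch origin              # learn about branches created remotely
git checkout ticket-11        # switch to it locally
git push origin ticket-11     # push your local commits to that branch

# Route 2: create the branch locally first, then publish it
git checkout -b ticket-11
git push -u origin ticket-11  # -u sets the upstream so later pushes are just "git push"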
I cloned a GitLab project in Python that usually has 2 branches: main and develop. Every time someone adds a new feature, they create a new branch from a ticket in Jira, work on it, and this eventually gets merged to develop. I made some changes in the project locally on my computer using PyCharm. I created a new branch using Jira, let's call it ticket-11; it is currently identical to develop. I want the changes to appear in ticket-11. If I use the drop-down menu in PyCharm and select Git -> Push..., this branch doesn't appear there. And I got an email saying that the pipeline failed (I saw that the same thing happens for other branches with other tickets too). How can I simply push the changes from the command line to that specific branch?
Pushing changes to a certain branch created with Jira
Try writing -i test ./pom.xml, as the script keyword uses the repository as the current directory. On another note, it would be better to use a docker-compose.yml file to build and run a Docker command in GitLab CI.
I have the following configuration in my GitLab project:

cont_eval:
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  stage: cont_eval
  image: docker:stable
  services:
    - docker:19.03.12-dind
  script:
    - echo ${CI_PROJECT_DIR}
    - docker build -t gs .
    - docker image ls
    - docker run -v /var/someContainer:/var/someContainer -v $CI_PROJECT_DIR:/company/reports -v /var/run/docker.sock:/var/run/docker.sock -e URL -e USERNAME -e PASSWORD -e SCANNER_IMAGE -e REGISTRY_URL -e REGISTRY_USER -e REGISTRY_PASSWORD -e LICENSE company/gitlab-nexus-iq-pipeline /company/evaluate -i test $CI_PROJECT_DIR/pom.xml
  artifacts:
    name: "policy-eval"
    paths:
      - $CI_PROJECT_DIR/$CI_PROJECT_NAME-report.html

I'm getting an error referencing the file in -i test $CI_PROJECT_DIR/pom.xml. The error message is:

builds/<user>/gitlabproject/pom.xml' does not exist.

This is my project structure: Any ideas on how I can reference this file inside my GitLab configuration?
How to reference local files in a Gitlab pipeline (File does not exist error message)
It helped me when I moved my project somewhere else, like the desktop; I guess the original path was too long for TFX to be able to create this directory. — answered Sep 8, 2022 by Ahmed Mousa
I am following Building a TFX Pipeline Locally (https://www.tensorflow.org/tfx/guide/build_local_pipeline) on ubuntu 21.04. I am only running the CsvExampleGen component and I am getting the following error:ERROR:absl:Failed to make stateful working dir: ./my_pipeline_output/CsvExampleGen/.system/stateful_working_dir/2022-01-05T11:04:16.463569 Traceback (most recent call last):........ File "/home/mc/anaconda3/envs/tfx_linux/lib/python3.8/site-packages/tensorflow/python/lib/io/file_io.py", line 514, in recursive_create_dir_v2 _pywrap_file_io.RecursivelyCreateDir(compat.path_to_bytes(path)) tensorflow.python.framework.errors_impl.UnknownError: ./my_pipeline_output/CsvExampleGen/.system/stateful_working_dir/2022-01-05T11:04:16.463569; Protocol errorDo any suggestions, please? Thanks
local TFX pipeline run create ERROR Failed to make stateful working dir ; Protocol error
There are a few different ways to do this:
1. Using the UI, follow the instructions in the documentation for copying a single source table.
2. Use the table snapshot function as described here: https://cloud.google.com/bigquery/docs/table-snapshots-create
3. Create a view in the scratch dataset that points to the original table.
The benefit of the snapshot is that it stores less data on disk than a full copy; however, in options one and two the data becomes stale unless you put some kind of scheduled query or process behind it to refresh it. The view method keeps the data up to date as of execution time, but it can present permission issues, so you may also want to look into authorized views. — answered Jan 3, 2022 by Daniel Zagales
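A rough BigQuery SQL sketch of options 2 and 3; the dataset names Financial and Looker_Scratch come from the question, while the table name orders is just a placeholder:

-- Option 2: a table snapshot (cheaper than a full copy, but a point-in-time image)
CREATE SNAPSHOT TABLE Looker_Scratch.orders_snapshot
CLONE Financial.orders;

-- Option 3: a view, always reflecting the current data in the source table
CREATE VIEW Looker_Scratch.orders_view AS
SELECT * FROM Financial.orders;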
I am trying to move the data from Financial Schema tables to Looker_Scratch Schema tables.
How can I move data from BigQuery to same database in BigQuery but different schema?
"Is there a way to include and execute the same directory in one pipeline?" The folder in question is the default git checkout location for Azure Pipelines (see "Checkout path"). There is no way to include the directory for one stage and exclude it for another directly. However, you could use the Copy Files task to move the folder out of the checkout location and the Delete Files task to remove it where it is not needed. — answered Dec 31, 2021 by Leo Liu (the asker later confirmed this helped)
I have one azure pipeline with two different stages.The problem is, i have to include one directory for one stage and exclude the same directory for another stage. I need only one pipeline and I am trying to solve it it somehow. Is there any ways of solving this ? Maybe some other technics ?Thanks in response.
Is there a way to include and execute same directory in one pipeline?
If you have a private build pool (on-premise agents registered in an Azure DevOps pool) — let's call them build pool servers — and those servers have network connectivity to the servers you deploy to, then you can create a PowerShell or bash script (depending on your operating system) to check the result of the request. The script runs on your build agent, i.e. the machine that can reach your deployment servers. PowerShell example:

$result = (Invoke-WebRequest -Uri https://www.google.com).StatusCode; Write-Host $result

If the webpage exists you will get a 200 result, otherwise another HTTP code. — answered Jan 3, 2022 by GeralexGR
Comment (44A-RP): Thanks GeralexGR, I only have deploy agents, I will try to run the PowerShell via deploy.
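A sketch of wiring this into the pipeline, assuming a self-hosted Windows pool named OnPremPool and an internal URL http://intranet-app/health — both names are placeholders, not from the original question:

- stage: SmokeTest
  dependsOn: Deploy
  jobs:
  - job: CheckUrl
    pool:
      name: OnPremPool        # self-hosted agents that can reach the internal URL
    steps:
    - powershell: |
        $result = (Invoke-WebRequest -Uri 'http://intranet-app/health' -UseBasicParsing).StatusCode
        if ($result -eq 200) {
          Write-Host "OK"
        } else {
          Write-Host "Got $result"
          exit 1
        }
      displayName: 'Smoke test internal URL'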
I'm doing an stage in azure pipeline after deploying. I would like to run a script to get the HTTP 200 return from the application that was deployed.Deploy runs on servers on premise. And the URL I want to test is only accessible on the internal network. There are some smoke web tests on the marketplace, but they only test URLs that are accessible on the internet.it is possible to run a script as below, being executed by the installed onpremise agent. The agent on the internal network would be able to access the URL to be testedresult=$(curl -s -o /dev/null -w “%{http_code}” www.bing.com/) if [ “$result” == “200” ] then echo “OK” else echo $result exit 1 fi
Azure Pipeline - script to test URL in on premise environment
This seems to be very similar to what you have: https://issues.jenkins.io/browse/JENKINS-65790 — "This issue was resolved by disabling the rbenv plugin." — answered Dec 24, 2021 by Jimmy K.
I accidentally pressed something and now when I try to create a new project in Jenkins everything is aligned to the middle.. how can I align everything back to the left? I tried to restart Jenkins, I even updated it to a newer version but nothing fixed it
How to align text to left on Jenkins
Revisiting the Jenkins docs, I found that two parameters of the build step need to be true for the status of the triggered job to affect the pipeline:
propagate (boolean, optional): If enabled (the default), the result of this step is that of the downstream build (e.g. success, unstable, failure, not built, or aborted). If disabled, this step succeeds even if the downstream build is unstable, failed, etc.; use the result property of the return value as needed.
wait (boolean, optional): If true, the pipeline waits for the result of the build step before jumping to the next step. Defaults to true.
My final code (SCM checkout job):

build job: 'FrontEnd_Checkout', wait: true, propagate: true, parameters: [string(name: 'BRANCH_NAME', value: "${env.frontend_branch}")]

— answered Nov 19, 2022 by Baher El Naggar
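If you instead want to inspect the downstream result yourself rather than letting the step fail the pipeline, a small sketch using the documented return value — the job name is just the one from this answer, the handling is an assumption:

script {
    // With propagate: false the step never fails by itself; we decide what to do
    def downstream = build(job: 'FrontEnd_Checkout', wait: true, propagate: false)
    echo "Downstream finished with: ${downstream.result}"
    if (downstream.result != 'SUCCESS') {
        error("Stopping the pipeline because the downstream build failed")
    }
}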
In one of pipline’s stages, In the log I have a message with build failed , and then a little later i have a successful build message. I want the pipeline to stop when it see a failed build message and not continue, because if it continue, it will see it successful and it will not stop and we will not know that there is a problem in the logBuild Failed messageBuild Success Image
How to exit from the Jenkins pipeline if we have "BUILD FAILED" message
The cut -f1 part of your command determines what "column 1" means by scanning the lines for TAB separators. The unmatched line you're seeing likely just uses a space character instead. The easiest fix is to run cut again, this time looking for spaces instead of TABs:

cat /etc/services | grep -Ev '^#|^$' | cut -f1 | cut -d' ' -f1 | sort -u > uniqueservices.txt && wc -l uniqueservices.txt

— answered Dec 13, 2021 by Raxi
Comment (silverdagger): Nice one @Raxi, that's perfect. I just assumed I couldn't pipe to the same command as previous.
Comment (Raxi): No worries. The pipes are handled by the shell, not the application itself. When you run a single program such as nano /etc/services, a nano process is created whose stdin and stdout are connected to your terminal. When you run cat /etc/services | sort | head -5, three processes are created: cat's stdout is hooked up to sort's stdin, sort's stdout to head's stdin, and head's stdout is your terminal again.
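An equivalent one-liner using awk, which splits on any run of whitespace (tabs or spaces) and so avoids the double cut — a possible variation, not part of the original answer:

cat /etc/services | grep -Ev '^#|^$' | awk '{print $1}' | sort -u > uniqueservices.txt && wc -l uniqueservices.txt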
I'm almost there... I'm supposed to end up with is 340 unique services. So far, I can only get it down to 341.These are my tasks:Extract all the service names from the file.Sort the names alphabetically removing any duplicates.Remove any blank lines or lines that do not contain letters of the alphabet.Capture the final output to a file named 'uniqueservices.txt'.Count the lines in the file using a conditional command that is only executed if the previous combined commands are successful.This is the command I used:cat /etc/services | grep -Ev '^#|^$' | cut -f1 | sort -u > uniqueservices.txt && wc -l uniqueservices.txtHere's what I should get:This is what I should getThis is what I actually get:What I actually getI'm guessing there are (as always) better ways of doing this but... hey, I'm new to this. So close though!Thanks in advance.S
I can't quite get the results to be fully unique - Linux pipelining
In the end we chose to create files and store the messages there. We pass the file name as an argument to the Python script and log everything to it. We then clean up the whole workspace after the job succeeds. — answered Dec 28, 2021 by Aladin
I have a jenkins pipeline where I have 2 stages:pipelien { agent {label 'master'} stages ('stage 1') { stage { steps { sh "python3 script.py" //this script returns value {"status": "ok"} } } stage ('stage 2') { // Do something with the response JSON from script.py } } }The issue is that I cannot do it. What I have tries:Pass the result to an environment variable, but for some reason Jenkins didn't recognize it. Maybe I did something wrong here?Playing an parsing stdout of script.py is not an option, because this script prints lot of logsThe only option which is left is to create a file to store that JSON and then to read that file in the next step, but it's ugly.Any ideas?
Return result of python script in Jenkins pipeline
"I want to schedule the pipeline starting at 7AM only, Monday till Friday." About the cron template: the format is mm HH DD MM DW. Your proposal was 0 0 7 ? * MON,TUE,WED,THU,FRI *, but the template has only 5 fields, starting with minutes and ending with day of week — in your example I see more than 5 fields. — answered Dec 8, 2021 by magmichal05
Comment (VenkateshDodda): Please refer to my updated answer with the required changes.
Comment (magmichal05): Maybe I wasn't precise enough. The scheduled pipeline should start on Monday at 7AM and run once, then do the same each day until Friday. In my opinion it should look like * 7 * * 1-5 — am I right?
Comment (VenkateshDodda): Yes, your expression should be 0 7 * * 1-5, as I already stated in my answer.
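Putting the comments together, a sketch of the corrected schedule block from the question, using a 5-field cron expression that fires once at 07:00, Monday to Friday:

schedules:
- cron: '0 7 * * 1-5'
  displayName: Monday till Friday starting at 7AM
  branches:
    include:
    - master
  always: true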
I want to schedule a pipeline to execute it every single minute - only for test purpose:trigger:nonepool: testpr: noneschedules:cron: '* * * * * 'displayName: Monday till Friday starting at 7AMbranches:include:- masteralways: trueWhy this is not working? Ultimately I want to schedule this pipeline starting from 7AM Monday till Friday. Also try to test this and unfortunately it didn't start at all.
Azure DevOps pipelines scheduler - can't Triger pipeline
I did try this at my end: when there was an exception in the code, I did see the error in the activity output, but in my case the activity failed, like @Alex mentioned. In your case you could check the output of the activity and see whether there is any run error. If there is no runError, then proceed with the next activity:

@activity('Notebook2').output.runError

— answered Dec 29, 2021 by Satya V
I have an ADF pipeline which has around 30 activities that call Databricks Notebooks. The activities are arranged sequentially, that is, one gets executed only after the successful completion of the other.However, at times, even when there is a run time error with a particular notebook, the activity that calls the notebook is not failing, and the next activity is triggered. Ideally, this should not happen.So, I want to keep an additional check on the link condition between the activities. I plan to put a condition on the status of the commands running in the notebook (imagine a notebook has 10 python commands, I want to capture the status of 10th command).Is there a way to configure this? Appreciate ideas. Thank you.
Databricks Notebook Command Status Handshake with ADF
I solved my issue. It turns out I was saving my screenshots to the Debug folder, while the Azure build was creating a Release folder. I just changed the folder and it worked. — answered Dec 2, 2021 by Alan Spindler
I'm uploading a Selenium Webdriver project (C#) on Azure.I implemented logs and screenshots on errors, and for a couple of weeks, they appeared correctly in the Artifacts after the test finishes.Now, for some reason, they don't appear in the Artifact folder anymore (the other artifact files appear correctly)In my local execution, they do work flawlessly on any error I encounter.Is there any setting that could cause this?
Selenium Webdriver in Azure Pipeline - Logs and Screenshots don't appear in Artifacts anymore
output is a string variable, not a list (output = "['test1','test2','test3']"). Solution: convert output to a list:

output = sh(script: "python3 list.py", returnStdout: true).replace('[','').replace("'",'').replace(']','').split(',')
for (int i = 0; i < output.size(); i++) {
    stage(output[i]) {
        echo output[i]
    }
}

— answered Dec 1, 2021 by Brahim Ben Amira
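A possibly cleaner variant, assuming the Pipeline Utility Steps plugin is installed and that list.py is changed to print a real JSON array (e.g. ["test1", "test2", "test3"]) instead of a Python repr:

def raw = sh(script: "python3 list.py", returnStdout: true).trim()
def names = readJSON(text: raw)   // parses the JSON array into a list
names.each { name ->
    stage(name) {
        echo name
    }
}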
output of list.py is ['test1','test2','test3']when I use list.py in jenkins declarative pipeline to create dynamic stages,output = sh(script: "python3 list.py", returnStdout: true ) for(int i=0; i < output.size(); i++) { stage(output[i]) { echo output[i] } }I get the output as stage 1: [stage 2 : tstage 3 : estage 4: s ....... and so on it splits element by elementBut the actual output should be, stage 1: test1stage 2 : test2stage 3 : test3How to get this output ,how to split to get only the values of the list in jenkins declarative pipeline????
how to split a list into separate values in jenkins declarative pipeline
Please confirm that the JSON definition of the new pipeline is different from the original abalone pipeline. When using the Python SDK, the Pipeline object has a .definition() function that you can use to check the pipeline before you run pipeline.upsert() to create/update it. If you've changed the code and updated the pipeline through upsert(), you should see the new pipeline. I work at AWS and my opinions are my own. — answered Feb 22, 2022 by Kirit Thadaka
I defined new pipeline code from sagemaker abalone pipeline but I ended up using same pipeline as abalone pipelineenter image description hereWhat should I do? Please help.
Sagemaker pipeline
You should have a look at Airflow. You can use a PythonOperator or BashOperator to execute each piece of your code separately — for example, use one BashOperator to pull the latest code from GitHub and another one to execute it. — answered Nov 27, 2021 by Hans Bambel
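A minimal sketch of what that could look like in Airflow; the directory paths and commands are placeholders standing in for the separate per-repository steps mentioned in the question:

from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="multi_repo_pipeline",
    start_date=datetime(2021, 11, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Each repository/step of the pipeline lives in its own task
    update_step1 = BashOperator(
        task_id="update_step1",
        bash_command="cd /opt/pipeline/step1 && git pull",
    )
    run_step1 = BashOperator(
        task_id="run_step1",
        bash_command="cd /opt/pipeline/step1 && python run.py",
    )
    update_step1 >> run_step1   # run only after the code has been updated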
I have one pipeline build in python that consists in many different steps. Each step is an individual process that can be run standalone. I want each step/process to be in a different repository so each developer can focus in his project. The pipeline just will import all projects/code and run the pipeline. What is the best way to organize the code and import all the individual projects in the main pipeline projects?
Multirepository pipeline in python
You can add a stage for cloning the test branch and then run the build and the test stages at the same time using parallel. The following pipeline should work:

pipeline {
    agent any
    stages {
        stage('Clone branches') {
            steps {
                echo 'Cloning cypress-tests'
                git branch: 'cypress-tests', url: 'https://github.com/...'
                echo 'Cloning dev ..'
                git branch: 'dev', url: 'https://github.com/...'
            }
        }
        stage('Build and test') {
            parallel {
                stage('build') {
                    steps {
                        echo 'Building..'
                        sh 'npm install'
                        sh 'npm run dev'
                    }
                }
                stage('e2e Test') {
                    steps {
                        echo 'Testing..'
                        sh 'cd cypress-e2e'
                        sh 'npm install'
                        sh 'npm run dev'
                    }
                }
            }
        }
    }
}

You will end up with the corresponding pipeline view. — answered Nov 27, 2021 by Brahim Ben Amira
Comment (JohnPix): The issue here is that the dev project overwrites the cypress-tests cloned project 🤔
I am trying to configure the pipeline to run automated e2e test on each PR to dev branch.For that I am pulling the project, build it and when I want to run my tests I can not do this because when the project runs the pipeline doesn't switch to the second stage.The question is when I build the project in Jenkins and it runs, how to force my test to run?I tried parallel stage execution but it also doesn't work, because my tests start running when the project starts building.My pipeline:pipeline { agent any stages { stage('Build') { steps { echo 'Cloning..' git branch: 'dev', url: 'https://github.com/...' echo 'Building..' sh 'npm install' sh 'npm run dev' } } stage('e2e Test') { steps { echo 'Cloning..' git branch: 'cypress-tests', url: 'https://github.com/...' echo 'Testing..' sh 'cd cypress-e2e' sh 'npm install' sh 'npm run dev' } } } }
Run tests in Jenkins pipeline when the build is running
OK. Using the existing semantics of Jenkins I managed to achieve what I wanted with the following snippet:

stage('Confirm if production related') {
    when {
        beforeInput true
        expression { params.ENV == 'production'; }
    }
    input {
        message "Should I deploy to PRODUCTION?"
        ok "Yes, do it!"
    }
    steps {
        script { _ }
    }
}

Not bad, but not great either. — answered Nov 26, 2021 by aleksanderacai
this is very tough to me to understand how Jenkins works. In general when you read documentation and define pipeline, things go smooth. I understand pipelines, stages, steps, scripts. What I don't understand is declaration vs runtime. Especially when it comes to WHEN declaration and evaluating expression. For example:What is the form of expression? Should it return something like: return true; or maybe it should be statement like: trueWhen it gets executed? If I access params.MY_PARAMETER_FROM_INPUT, do WHEN has access to its value picked by user?Is it possible to switch execution between runtime vs pipeline declaration time?Can I ask for stage (input with message box) only if given condition within WHEN is meet and if not, then don't ask for it but run stage anyway?When you use IF from script and when WHEN from stage. Can WHEN be defined else where? Within steps, scripts, pipeline?For example in a stage I've putwhen { expression { params.ENV == 'prod' } } input { message "Really?" ok "Yeah!" }but the expression was ignored and the question was always asked (current understanding is that it should skip stage/abort whole pipeline when ENV input param is different than "prod" value)Any thoughts?
Jenkins and its WHEN statement
I see you have used commands such as call newman run. Instead use:

newman run collection.json -e environment_file.json --reporters cli,junit,htmlextra --reporter-junit-export junitReport.xml

It works for me from Azure Pipelines. — answered Jun 22, 2022 by chinzaa (edited Jun 23, 2022)
I am trying to run a series of Postman test in my Azure build pipeline but keep getting errors that Newman is not installed, I have checked by going to the exact location and running the Newman commands without any issue. My screenshots show I have implemented them and the errors.
Unable to run Postman Newman command in Azure pipeline
It is possible by using a Docker image in the pipeline stage. You can check this example:

node {
    checkout scm
    /*
     * In order to communicate with the MySQL server, this Pipeline explicitly
     * maps the port (`3306`) to a known port on the host machine.
     */
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw" -p 3306:3306') { c ->
        /* Wait until mysql service is up */
        sh 'while ! mysqladmin ping -h0.0.0.0 --silent; do sleep 1; done'
        /* Run some tests which require MySQL */
        sh 'make check'
    }
}

You can also check this post. — answered Nov 24, 2021 by kaan bobac
I need to create a pipeline to execute a mysql query and email response the query should be sanitized first, like should be only select and on some particular tables only, if this makes sense???
how to create a pipileine which will query a Mysql DB
It means that JMeter is not installed on the machine where your build is running. Moreover, "No such file or directory" is a Linux error message, and in a Linux filesystem there is no drive c:\. So either install JMeter on the machine where you're running your build, or use e.g. the JMeter Maven Plugin, which downloads JMeter automatically. — answered Nov 24, 2021 by Dmitri T
Comment (HARINI B K): JMeter is installed on my local machine and I am running this from my GitLab account on a Windows machine. The main problem in the gitlab-ci.yml file is that the cd (change directory) command is not working: cd "c:/apache-jmeter/bin" gives 'no such file or directory' even though the JMeter folder is present in the C drive of my local machine.
Facing a issue "NO SUCH FILE OR DIRECTORY" In script - cd "PATH/OF/JMETER".Load tests: stage: test script: - cd c:\apache-jmeter-5.4.1\bin - .\jmeter -n -t test.jmx -l testresults.jtl
No such file or directory - gitlab and jmeter CI
After poking around some, I discovered that GitHub repos have special stores for sensitive data. This section can be found at github.com/<user>/<repo>/settings/secrets/actions, and the documentation that goes along with this feature is at https://docs.github.com/en/actions/security-guides/encrypted-secrets. As for generating these IDs/secrets: unless the resource provider has a testing feature that allows public unrestricted access to some test account, the project needs its own account with the provider. — answered Jan 2, 2022 by null-point-exceptional
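A sketch of how a workflow could read such repository secrets when running the example tests; the secret names, test path and command are placeholders, not part of the original answer:

name: example-tests
on: [push, pull_request]
jobs:
  run-examples:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run OAuth example tests
        env:
          CLIENT_ID: ${{ secrets.OAUTH_CLIENT_ID }}          # set under Settings -> Secrets -> Actions
          CLIENT_SECRET: ${{ secrets.OAUTH_CLIENT_SECRET }}
        run: python -m pytest examples/tests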
I am working on an open source project that provides library support for OAuth2 requests/protocols. This project includes examples on how to implement the library with a variety of API providers (Google, LinkedIn, Instagram, etc...)Currently these examples exist as standalone text files that have variable declarations for the client id and key/secret:client_id = 'getThisFromOwnerOfApi' client_secret = 'getThisFromOwnerOfApi' ... # build request, send, read response etc...These are made so that the user updates the file manually with their client id/secret. Then runs the file locally to see how the library can be used.I would like to restructure the examples so that they are more functional, maintainable, easier to use. Each example would come with:a run filea test filea config file (?)a readmeUltimately I would like the examples to live in their own directory and have the project pipeline run automated tasks against each example (like the test file) to make sure that the examples are working/up-to-date.My question is, how could I approach the handling of the API keys to run the automated tests?
How to structure automated tests for open source API integration that requires client key/secrets?
"I use MongoDB for my database and my connection string is located in a .env file." If your application is configured dynamically, this might be the issue (I'm assuming you're not committing secrets to version control). Go to Project -> Settings -> CI/CD -> Variables, click Add Variable, paste the contents of your .env file into it, and make sure to select File as the type. GitLab doesn't allow .env as a valid variable name, so use some other name, like ENV.

stages:
  - testing

Testing:
  stage: testing
  image: node:latest
  services:
    - mongo:latest
  before_script:
    # ENV is exported as a path to your "ENV" file,
    # this is copying it to a local '.env' file
    - cp $ENV .env
    # If you need it, this is a way to export the
    # environment variables inside your file
    - source .env; export $(cut -d= -f1 .env);
    # Debugging the pipelines
    - ls -lah
    # Your old commands
    - npm install --no-optional
  script:
    - npm run test

You can add more "debug" scripts to the before_script or script sections (like the ls one); this will help you eventually catch what's going wrong in your pipelines. Remark: if you don't want to add an entire file to CI, you can add the connection string separately as Variable-type variables. — answered Nov 21, 2021 by Aristu
I have written tests for my code and they all pass. I use nodejs to make a REST api. I decided to commit everything to a gitlab repository. This all worked. I then added the gitlab-ci.yml file to my project. It currently looks like this:stages: - testing Testing: stage: testing image: node:latest services: - mongo:latest before_script: - npm install --no-optional script: - npm run testIm fairly new to pipelines, and i am not sure whats wrong with it. I use MongoDB for my data base and my connection string is located in a .env file. The tests are written using mocha and chai. When I commit, the pipeline fails. I get the error:ERROR: Job failed: exit code 1When I look further in the error it says:Error: Cannot find module '../controllers/UserController'This is strange because im not getting this error in my code editor (Visual Code), and the file UserController is located in the controllers folder. I feel like the gitlab-ci.yml is missing something, but I cant figure out what it is. Any hints would be appreciated.
Why does my gitlab-ci.yml keep crashing on npm run test, while it works in the code editor?
You have to be aware that the rules are evaluated in order, and as soon as one applies the evaluation stops. "Rules are evaluated when the pipeline is created, and evaluated in order until the first match. When a match is found, the job is either included or excluded from the pipeline, depending on the configuration." (https://docs.gitlab.com/ee/ci/yaml/#rules) This means that if you put - if: '$CI_PIPELINE_SOURCE == "web"' as the first rule, it is evaluated first, so if somebody triggers the pipeline via the web, it does not matter whether the MR is a draft or not. — answered Nov 18, 2021 by Simon Schrottner
Comment (Alexey Pavlunin): Sorry, I forgot to clarify. There are two different "Run pipeline" buttons, one in the "Pipelines" section and another in "Merge requests". The rule - if: '$CI_PIPELINE_SOURCE == "web"' only works for the former; GitLab sees the latter as a merge_request_event, not web. I need to run it in both cases, but I can't find a way to trace the button push in the second one.
We use an on-prem GitLab server. One of the rules for launching our MR pipeline is its state. It should not beDraftorWIP, as I stated below.rules: - if: $CI_MERGE_REQUEST_TITLE =~ /^WIP/ || $CI_MERGE_REQUEST_TITLE =~ /^Draft/ when: never - if: '$CI_PIPELINE_SOURCE == "merge_request_event"' - if: '$CI_PIPELINE_SOURCE == "web"'But in this form, it does not start in all cases with the error: “No stages/jobs for this pipeline”. Our developers want the MR pipeline to start when you click on the “Run pipeline” button in the “pipelines” section of the current MR. I did not find any indicator for clicking on this button, neither in the webhook nor anywhere else. I tried to use onlywhen: manual,if: '$CI_PIPELINE_SOURCE == "web"and many other rules, but still no luck. Is there a way to make an exception to run the pipeline in this case, but keep it for the rest?
Run MR pipeline on “Run pipeline” button
You can use the diff command in the gitlab-ci.yml script section, for example:

script:
  - if diff source.txt target.txt; then echo 'equal'; else echo 'not equal'; exit 1; fi

(Note: checking "$?" on a separate script line would not work as intended, because GitLab CI aborts the script as soon as a command exits non-zero, so the comparison and the check are combined into one line here.) — answered Nov 18, 2021 by Mouson Chen
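Since the question is about comparing a freshly generated file against the committed one, another common sketch is to regenerate and let git do the comparison; the job name and yarn command are taken from the question, while the generated path is a placeholder:

js-check-graphql-gen:
  stage: build
  script:
    - yarn workspace $WORKSPACE run graphql:gen
    # Fails (non-zero exit) if the regenerated SDK differs from what was committed
    - git diff --exit-code -- ./src/generated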
How can I generate a file in my CI pipeline and check if there are any differences to an existing committed file?My CI pipeline runs on merge requests and has a few jobs that run various scripts.In my dev environment, I have one script that I use to generate a GraphQL SDK and I run this on my command line and commit before I make a merge request.yarn workspace $WORKSPACE run graphql:genHowever sometimes I forget to do this before pushing code up and it can be a bit of a hassle waiting for the pipeline to fail and then making the changes locally and pushing code up again.So my idea is to create a job that runs the GraphQL SDK generator script in my pipeline, which will generate a new SDK and produce an error if the generated file is different to the committed file in the MR. So far I have the script running but that is all.js-run-graphql-gen: interruptible: true # This allows future runs of this job to cancel previous ones. extends: [.js-project-xxxxx.com] stage: build rules: - if: "$DEPLOY_xxxx == 'true'" # Allow Frontend to be explicitly skipped. - if: "$SKIP_FRONTEND == 'true'" when: never # Create a pipeline if there are changes. - changes: - xxx/**/* script: - yarn workspace $WORKSPACE run graphql:genCan anyone please point me in the right direction?Thank you for your help!
Generate File In GitLab Pipeline and Check if Any Differences to Commited File
Your example doesn't make much sense to me, but I think I can show you a way to split your file into chunks using GNU Parallel. You give no indication of the content of your file nor of its size, but let's say it is a million lines; then I could use the command seq 1000000 to generate output similar to your file, so I'll use seq to synthesise lines below. OK, let's imagine you have 800 lines in your file — it doesn't matter if you have millions. We can send 800 lines into GNU Parallel and tell it to split them into chunks of 200 lines each, then pass each chunk through wc -l to count the lines:

# Generate 800 lines and ask GNU Parallel to send 200 lines to each of multiple processes to count them
seq 800 | parallel -N 200 --pipe wc -l
200
200
200
200

If the number of lines in the file is not divisible by the lines per job, the last job will get fewer lines. I added the -k parameter to keep the output in order so you can see it is the last process that gets fewer lines:

seq 800 | parallel -k -N 300 --pipe wc -l
300
300
200

TL;DR: I am suggesting you run a command like the following — experiment with the value that overloads redis-cli:

cat OneOrMoreBigFiles | parallel -N 1000 --pipe redis-cli --pipe

— answered Nov 14, 2021 by Mark Setchell
I'm trying to mass insert multiple files of data into Redis because inserting the whole data at once, using just one file, didn't work due to it being too large.I'm using following command to insert one file into Redis:cat data.txt | redis-cli --pipeHow can I insert via one command multiple files at once? I triedcat data.txt data1.txt (...) | redis-cli --pipebut this threw the same "too large" error as the approach with one file.
Mass inserting multiple files into Redis
Is your hub URL working? (http://[your host]:4444) Sometimes this error is thrown when an OS patch or upgrade has happened on your driver machine, so please restart your Docker image:

docker-compose -f docker-compose-v3.yml restart

If any OS upgrade or patch has happened, I suggest a restart. If it is still not working, please use the official yml file: https://github.com/SeleniumHQ/docker-selenium/blob/trunk/docker-compose-v3.yml — answered Nov 11, 2021 by Jayanth Bala
Comment (Kyle): Sorry, I don't think the first option did anything, or I might not have done it right. I'm trying to run the suite through a Bitbucket pipeline, not locally from my machine.
Comment (Jayanth Bala): Is your port right? We should always use port 4444. Is the error occurring all the time?
I'm hoping someone will offer some help here.I tried following the similar questions asked before but no change in the results.I'm trying to run a VS 2019 .Net 5.0 project in a bitbucket pipeline but I'm getting the following error when I try initialise a new chromeDriver (OpenQA.Selenium.WebDriverException : Cannot start the driver service on http://localhost:34811/)Here is the copy of the yml I'm using (I've tried both the commented and uncommented version but both getting the same result)bitbucket-ymlFurther information about this project is, it's a Selenium/Nunit project in c# running Cucumber feature files.Locally I'm able to run the project in parallel but only getting issues when trying to run in the pipeline.Any advice would greatly be appreciated :)
Cannot start the driver service on http://localhost:34811/
This can help you get the VSID of the user who queued the pipeline: Build.QueuedById. And these are the official documents: https://learn.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&tabs=yaml and https://learn.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&tabs=yaml#how-are-the-identity-variables-set — answered Nov 10, 2021 by Bowman Zhu-MSFT
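A rough sketch of feeding such a predefined variable into a Bicep deployment as a tag value; this uses Build.RequestedFor (the display name of the requester) rather than the ID, and the service connection, resource group and parameter names are placeholders:

steps:
- task: AzureCLI@2
  displayName: 'Deploy Bicep with creator tag'
  inputs:
    azureSubscription: 'my-service-connection'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az deployment group create \
        --resource-group my-rg \
        --template-file main.bicep \
        --parameters createdBy="$(Build.RequestedFor)"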
Background: I want to create a tag on azure resources created via bicep which is orchestrated via my azure devops pipeline(s). The tag in question I would like is the user who created them I.E the person who ran the release pipeline.I'm not aware of any pre-defined variables that can capture the current ADO user, and I've also tried in PowerShell, however the below snippet only captures the build agent user on the microsoft hosted agent$currentUserTemp=[System.Security.Principal.WindowsIdentity]::GetCurrent().Name Write-Host "##vso[task.setvariable variable=currentUser;]$currentUserTemp"
Capture current user as variable - Azure DevOps
The error you're receiving is literally what it says: you shouldn't useonlywithrulestogether in the same job.Basically the reason is that this could lead into problems due to mixed behavior.From the documentation:rulesreplacesonly/exceptand they can’t be used together in the same job. If you configure one job to use both keywords, the GitLab returns a key may not be used with rules error.ShareFollowansweredOct 21, 2021 at 16:05Daniel Campos OlivaresDaniel Campos Olivares2,43511 gold badge1212 silver badges1717 bronze badgesAdd a comment|
I am trying to execute a job on some pipeline variables. I have used 'rules' in my .gitlab-ci.yml file but getting the error "key may not be used with 'rules': only". How can I do this?build-dev: stage: build only: - master - branches rules: - if: '$CI_COMMIT_BRANCH=="my-featured-branch"' when : never
How to run a job on the basis of pipeline variables in Gitlab?
"SonarQube server [$SONAR_HOST_URL] can not be reached" means SONAR_HOST_URL is not defined in the CI environment. You want to use Repository settings -> Repository variables to add both the host URL and the token (ensure the Secured checkbox is set for the token). P.S. The host URL should be publicly accessible. It won't work for localhost:9000, because Bitbucket CI has no way to connect to an instance of the SonarQube server running on your local dev box. — answered Apr 28, 2022 by Artur Bakiev
I am trying to integrate the sonarqube to bitbucket pipeline, and have following code there- pipe: sonarsource/sonarqube-scan:1.0.0 variables: SONAR_HOST_URL: ${SONAR_HOST_URL} SONAR_TOKEN: ${SONAR_TOKEN} - pipe: sonarsource/sonarqube-quality-gate:1.0.0 variables: SONAR_TOKEN: ${SONAR_TOKEN}and I am getting errorSonarQube server [$SONAR_HOST_URL] can not be reachedI first tried setting localhost:9000, which is running at my local server, got this error, then I give website url, still getting same error,what should I giveSONAR_HOST_URLAny help, Thanks,
SONAR_HOST_URL not reachable in bitbucket pipeline sonarqube
Concurrency set to one means one pipeline run executes and then the next one starts; based on your screenshot we can see that after the first run was initiated, the second one started behind it. We need to understand how these limits apply to your Data Factory pipelines — refer to the mrpaulandrew blog for a better understanding of concurrency limits. If your pipeline has a concurrency policy, verify that there are no old pipeline runs still in progress. — answered Oct 8, 2021 by SaiKarri-MT
Comment (stanleyerror): Making sure no old pipelines are running is important when testing concurrency. I created a concurrency==1 pipeline with a Wait (10 min) activity; multiple pipeline runs queue when triggered manually, and I noticed a run is marked as started at the time it is pushed into the queue.
I set the concurrency for my Azure ADF pipeline to 1 to avoid having two pipelines running simultaneously:Concurrency settingNevertheless, the ADF is launching them concurrently:Overlapping pipeline runsAm I doing something wrong ?
Azure Data Factory pipeline concurrency issue
First, you need to find out which job triggered the downstream job:

def upstreamJob = currentBuild.rawBuild.getCause(hudson.model.Cause$UpstreamCause)

Then you can find out who triggered that upstream job:

def upstreamJobCause = upstreamJob.getUpstreamRun().getCause(hudson.model.Cause$UserIdCause)

From the upstreamJobCause it is possible to retrieve the User object and then that user's email address:

def user = User.get(upstreamJobCause.getUserId())
def userMail = user.getProperty(hudson.tasks.Mailer.UserProperty.class).getEmailAddress()
println("User mail: " + userMail)

You can put this code into your catch block. Note that this code only works for your exact use case; otherwise extra error handling is necessary (e.g. if the upstream job is time-triggered, the code will not work). References: Mailer UserProperty, Model Run, Model User. — answered Sep 25, 2021 by Melkjot
Comment (Narasinga Rao): I tried your code and got: Caused: java.io.NotSerializableException: org.jenkinsci.plugins.workflow.support.steps.build.BuildUpstreamCause
Comment (Melkjot): It works in my pipeline code. Take a look at this answer stackoverflow.com/a/51638793/10721630 and try it out.
Upstream Job:pipeline { agent any stages { stage('Hello') { steps { build job: 'test-app', parameters: [string(name: 'dummy', value: "")] } } } }Downstream Job:pipeline { agent any stages { stage('Hello - downstream') { steps { try{ sh '' } catch(e){ emailext body: "Stage failed, Please check $BUILD_URL", subject: 'Build Failed', recipientProviders: [buildUser()] } } } }if downstream job fails, I want to send email to the user who triggered it's upstream job.[buildUser()]is giving errors.
If a build failed in downstream job, how to send email to user who triggered it's upstream job in Jenkins
If your bitbucket-pipelines.yml is exactly as you shared it (and you posted it complete), there's a missing part: the actual deployment "pipe" needs to be listed there, under "script". As a reference, please visit this thread: https://community.atlassian.com/t5/Bitbucket-questions/Pipeline-fails-with-No-such-file-or-directory/qaq-p/1069160 — I hope it is useful for you. — answered Sep 23, 2021 by Nestor Daniel Ortega Perez
I've read that you can specify in your yml configuration the directory so it does not build in the root but rather in/sites/mywebsitename.The config until the error:image: php:7.4 pipelines: branches: master: - step: name: Deploy to production deployment: production script: - cd sitesThe error:No such file or directoryAny comment or advice is highly appreciated.
cannot connect to directory in bitbucket pipeline
You can use sklearn.pipeline.Pipeline. Make a list and pass each step as a (name, transformer) tuple. Description here: https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html Saving your pipeline could be a bit difficult though; please check this question: "how to save a scikit-learn pipeline with a keras regressor inside to disk?" — answered Sep 17, 2021 by eh329
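A minimal sketch of wrapping imputation in a Pipeline and saving it to disk; SimpleImputer stands in for whatever imputer is actually used (e.g. MissForest), and the custom "snap to closest observed value" step from the question would have to be added as its own fit/transform transformer:

import joblib
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer

pipe = Pipeline(steps=[
    ("impute", SimpleImputer(strategy="mean")),
    # ("round_to_observed", <custom transformer>),  # hypothetical extra step
])

pipe.fit(data)                        # 'data' is the training DataFrame from the question
joblib.dump(pipe, "imputer_pipeline.joblib")

# Later, on a new sample:
pipe = joblib.load("imputer_pipeline.joblib")
imputed = pipe.transform(new_data)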
I wrote multiple steps to impute a dataset, and I want to pickle/save these steps so that it can be loaded and used automatically when analyzing a new sample.The steps I did for imputation are:imputer = MissForest() imputed_data = imputer.fit_transform(data) imputed_data = pd.DataFrame(imputed_data, columns=data.columns) #Drop 'id' imputed_data_initial = imputed_data.drop('id', axis = 1) #Get unique values def get_unique_values(col_name): return data[col_name].dropna().unique().tolist() #Find closest distance def find_closest_value(target, unique_values): chosen = unique_values[0] L2 = (target - chosen) ** 2 for value in unique_values: if (target - value) ** 2 < L2: chosen = value L2 = (target - chosen) ** 2 return chosen #Imputation for col_name in columns_name_lst: columns_name_lst = imputed_data.columns row_count = len(imputed_data) unique_values = get_unique_values(col_name) if len(unique_values) < 2000: for i in range(row_count): target = imputed_data.iloc[i][col_name] imputed_data.iloc[i][col_name] = find_closest_value(target, unique_values)I want to pickle all these steps as a whole. What're ways I can do in python? Thanks!
How do I pickle/save my whole procedures of imputation?
Bash parses the command line before the shopt gets executed. A common workaround is to split it across two physical lines, so that the first one gets parsed and executed before the second. Minimal demo:

bash -c 'shopt -s extglob
echo !(app)'

The newline is crucial; if you replace it with a semicolon, you get the error you are reporting. If you don't want to or can't do that, perhaps use find instead:

find /var/www/html \! -path '/var/www/html/app2/*' -delete

— answered Sep 1, 2021 (edited) by tripleee
I am trying to auto deploy the angular build files to server using SHH and Jenkins.In the deployment folder, I have a directory 'app2' which should not be deleted.I wanted to remove the existing files in the target folder (/var/www/html) except the folder 'app2'.But I get the following error while deploying,SSH: EXEC: STDOUT/STDERR from command [cd /var/www/html && shopt -s extglob && rm -rf !(app2) && mv /var/tmp/MyApp/* /var/www/html] ... bash: -c: line 0: syntax error near unexpected token `(' bash: -c: line 0: `cd /var/www/html && shopt -s extglob && rm -rf !(app2) && mv /var/tmp/MyApp/* /var/www/html'I feel that there is something wrong with this statement,shopt -s extglob && rm -rf !(app2)What would be the solution to fix the above issue ?
Jenkins - SSH deployment issue - not able to delete files in target folder except one directory
Initialize-Diskwould follow your requirements as it also supports-PassThru. AsClear-Diskis supposed to:Cleans a disk by removing all partition information and un-initializing it, erasing all data on the disk.UsingInitialize-Diskshould be possible. Second example for it is:Example 2: Initialize a disk using the MBR partition styleInitialize-Disk -Number 1 -PartitionStyle MBRIt doesn't look like you're currently doing more withSet-Disk.ShareFollowansweredAug 30, 2021 at 12:29SethSeth1,2351616 silver badges3636 bronze badges3I already had the same idea. Unfortunately,Initialize-Diskcan only be called on RAW disks. It does not work on already initialized disks.–stackprotectorAug 30, 2021 at 13:58At least for me with a VHD (don't have a second drive available)Get-Disk -Number 1 | Clear-Disk -RemoveData -PassThru | Initialize-Disk -PartitionStyle MBR -PassThru | New-Partition -UseMaximumSize -AssignDriveLetterworks without a problem even after the disk has been initalized (Windows 10 21H1). You're correct that you can use it to initalize a RAW disk but Clear-Disk turns that disk back into a RAW disk (as per the description ofClear-Disk).–SethAug 31, 2021 at 5:351I can confirm thatClear-Diskun-initializes virtual hard disks like you did. Unfortunately,Clear-Diskdoes not un-initialize a physical USB drive (tested with two different USB drives on two independent machines).–stackprotectorAug 31, 2021 at 8:23Add a comment|
I am using the following PowerShell pipeline to reset a USB drive to my needs:Clear-Disk -Number <NUMBER_OF_USB_DRIVE> -RemoveData -RemoveOEM -Confirm:$false -PassThru | New-Partition -UseMaximumSize -AssignDriveLetter -IsActive | Format-Volume -FileSystem FAT32AfterClear-Disk, I'd like to insertSet-Disk -PartitionStyle MBR. The cmdlet accepts input objects from the pipeline, so I can use it and pipe objectstoit. But it does not return any results and does not have a-PassThruparameter, so I can process objectsfromit further in my pipeline. It looks likeSet-Diskcan only be used at the end of a pipeline.Are there other tricks available, so that I can useSet-Diskinsidea pipeline?My current workaround looks like that:Clear-Disk -Number <NUMBER_OF_USB_DRIVE> -RemoveData -RemoveOEM -Confirm:$false -PassThru | %{ Set-Disk -InputObject $_ -PartitionStyle MBR New-Partition -InputObject $_ -UseMaximumSize -AssignDriveLetter -IsActive | Format-Volume -FileSystem FAT32 }But I don't like it much, because technically aForeach-Objectshould not really be necessary to process exactly one object.
How can I use `Set-Disk` inside a PowerShell pipeline?
At this point there's nothing native to Elasticsearch that would provide this, sorry to say. — answered Aug 23, 2021 by warkolm
I am aware of the ingest pipeline for ingested documents, or the workaround of marking documents as deleted / moving them to a different index with a rollover policy.But is there a way to directly get notified and react upon deleted documents? (without making changes to the application side)
Is there a way to get notified of deleted documents in ElasticSearch?
You can use DeepStream's official Python segmentation example and modify it for your case of reading and saving JPEG frames. The following pipeline should work: source -> jpegparser -> decoder -> streammux -> fakesink. You can attach your frame-saving probe function directly to fakesink instead of to the seg component of the original pipeline. Also, for how to create the fakesink component, you can check another Python example on this line. — answered Sep 28, 2021 by VladVin
I want to know if someone can help with a Deepstream model code that takes a video in the source and outputs frames of that particular video in jpg.It would be helpful if you can share the Gstreamer CPP or Python code as well.
How to use Deepstream SDK to take a video and just extract the frames in jpg
Use something like below and put it in a script block:

def filename = 'datas.yaml'
def data = readYaml file: filename
// Change something in the file
data.image.tag = applicationVersion
sh "rm $filename"
writeYaml file: filename, data: data

— answered Aug 11, 2021 by Lineesh Antony
I'm learning Jenkinsfile and I'm trying to generate a YML file from the variables but it dosn't work properly.pipeline { agent any parameters{ choice( choices: ['Test1', 'Test2', 'Test3'], name: 'select_parameter' ) string( defaultValue: 'string', name: 'STRING_PARAMETER', trim: true ) def amap = ['select_parameter': ${params.select_parameter}] writeYaml file: 'datas.yaml', data: amap def read = readYaml file: 'datas.yaml' assert read.choice == ${params.select_parameter} } stages { stage('Build') { steps { echo 'Building..' } } stage('Test') { steps { echo 'Test prueba' } } stage('Deploy') { steps { echo 'Deploying....' } } }}Can someone help me? I've read this documentationhttps://www.jenkins.io/doc/pipeline/steps/pipeline-utility-steps/
Generate YML file from Jenkinsfile
You cannot preprocess targets in a ColumnTransformer or Pipeline (unless you plan on putting them together with the independent variables and then splitting them out later); however, there is the TransformedTargetRegressor (docs), which is meant for exactly this use case. — answered Jul 28, 2021 by Ben Reiniger
Comment (DerBenutzer): Thanks, this solves my problem, I did not know this existed!
Comment (Test): Please provide a minimum code snippet.
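Responding to the comment asking for a snippet — a minimal sketch that applies the log1p transform from the question to the target while keeping the ColumnTransformer for the features; the choice of LinearRegression is arbitrary:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.compose import TransformedTargetRegressor
from sklearn.pipeline import Pipeline

model = Pipeline(steps=[
    ("preprocessor", preprocessor),   # the ColumnTransformer from the question, without the target entry
    ("regressor", TransformedTargetRegressor(
        regressor=LinearRegression(),
        func=np.log1p,                # applied to y before fitting
        inverse_func=np.expm1,        # applied to predictions
    )),
])

model.fit(X_train, y_train)
y_pred = model.predict(X_test)        # predictions come back on the original scale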
I have problems to preprocess the dataset as a whole with columntransformer - maybe you can help:First I read in my dataset:X_train, X_test, y_train, y_test = train_test_split(df, target, test_size=0.2, random_state=seed)Then I do my preprocessing:preprocessor = ColumnTransformer( transformers= [ ("col_drop", "drop",["col1","col2",]), ('enc_1', BinaryEncoder(), ["Bank"]), ('enc_2', OneHotEncoder(), ["Chair"]), ('log', FunctionTransformer(np.log1p, validate=False), log_features), ('log_p', FunctionTransformer(np.log1p, validate=False), ["target_y]), ('pow', PowerTransformer(method="yeo-johnson"), pow_features) ], remainder='passthrough',n_jobs=-1)And after that I call a pipeline with my preprocessor:pipe.fit_transform(X_train, y_train)This produces the error:A given column is not a column of the dataframeAnd this makes in a way sense, because I use the preprocessor to do a nlog1p function on target_y, which is basically my target feature, which is only present in y_train and y_test. I assume that this causes the error, because the target is not in X_train.Question: Is it possible to preprocess X and y at once or is it mandatory to use another columntransformer/pipeline for my y values? Is there any good solution for this?
Using Column Transformer in Scikit to preprocess train and test data with target variable
In general, your understanding is correct. Pipeline objects are meant for sequential application of several transformations of X. From the user guide: "Pipelines only transform the observed data (X)." Also have a look at the glossary entry for the term transform: "transform — In a transformer, transforms the input, usually only X, into some transformed space (conventionally notated as Xt)." In the case of regression tasks, there is a special TransformedTargetRegressor which deals with transforming the target y and can e.g. be used at the end of a pipeline. Other than that, there is no canonical way of controlling transformations of y in a pipeline. — answered Jul 8, 2021 by afsharov
Comment (AkD): Thanks for the reply. So just to be certain, if I am to use a LabelEncoder or a custom transformer to remove certain instances (X and y), I would have to do it outside a pipeline performing other tasks?
Comment (afsharov): That's right. If you need to drop samples, it should be done before fitting the pipeline. The same goes for encodings of y.
My current understanding is that, we cant directly transform/retrievey-labelspassed as(X,y)while using a Pipeline.Thefit_transformat the end returns transformations only on theXpassed andyis only utilized in situations involvingfit(),fit_predict()and such.Is my understanding correct?Also is there a way to transform and retrievey(including when dropping instances using a Custom Transformer) without having to break out of a fully enclosed model training pipeline?
Labels (y) in Scikit-learn Pipeline and Compose Classes
You could use job status check functions together with dependencies between jobs. For example:

jobs:
  job1:
    continue-on-error: true
    # Do your stuff here
  job2:
    if: ${{ always() }}
    # Execute your post run command here

I've created a sample workflow to show something similar working, using continue-on-error with this condition. You'll see in that example that even with an error on job2, job3 is still executed afterwards. That workflow run looks something like this (the workflow doesn't fail and the post-run commands in job 3 always execute). — answered Jun 27, 2021 by GuiFalourd
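If the "post run" command lives in the same job as the main steps, a step-level sketch with the same idea; the commands are placeholders:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Main work
        run: ./do-the-work.sh
      - name: Post run cleanup
        if: ${{ always() }}      # runs whether the previous step succeeded or failed
        run: ./cleanup.sh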
Is there a way to execute a command post-run, no matter if the previous status was a success or failed something similar to post and always syntax in jenkinsfileI have triedcontinue-on-error: truebut this will make failed step as passed
How to execute command Post run in github actions?
I decided to make the following changes:
- Build the image only when the requirements are updated.
- Mount the script into the Docker image in the pipeline.
- Dev release: when I merge changes to the dev branch.
- Release: when I merge changes to the master branch.
— answered Jun 21, 2021 by tutunak
I'm trying to make optimization for the case: I have a repository that contains a python app, configuration files for different environments for this app, and a Docker file for it. One Gitlab pipeline for building an image and many other per configuration files that use containers with this app. This pipelines are run manually. Each merge to dev or master branches GitLab pipeline runs linters/tests and build the docker for this app. After that new image pushed to the Docker registry. I want manual pipelines to have a particular container version. But when I merge my changes to the dev branch or merge the dev branch to master a new version of the docker image will be built. I think my architecture isn't good. I don't understand how to made it's better. I even don't know how to write requests on google or what to read. Can you give me any advice or give direction to read/search etc?
Right orgranization pipelines with docker images
I don't think that is valid, since Helm will interpret it as a string. If I understand your question properly, you are looking for the index function:

{{- $kindstr := (index .Values "myapp").prop.type }}

Keep in mind that .Values.myapp must exist, or else this will throw an error, since it will try to access a property of a nil object. You can use the result in an if statement as well:

{{- if eq $kindstr "appx" }}
...
{{- end }}

— answered Jun 13, 2021 by Joshua Robles
is it possible in help templates to do :{{- $kindstr := printf ".Values.%s.prop.type" "myapp }}the result is valide pipeline pathand then use this in IF command?{{ - if $pipelinestr "appx" }} ... {{- end }}this is not working and not giving any failure eitheri guess the parser see this as string type not pipeline objectcan it be done somehow ?
Helm convert String to pipeline to be valid in IF check
Output variables can be referenced with macro syntax, $( ), so it should be:

BuildVersion: ${{ variables.BuildVersion }}$(SetBuildNumber.updatedCounter)

This only works if you reference the output variable from the same stage and the same job. In that case, you could also remove isOutput=true and simply use $(updatedCounter). If the reference takes place in a different stage or job, the syntax is a bit more complex. — answered Jun 9, 2021 by qbik
Comment (Andy): Yes — I changed the reference to macro syntax as you describe. I also changed that line in the script file to use Write-Host with "$updatedCounter" as the output, using double quotes instead of single quotes. Now it works!
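Putting the answer and the comment together, a sketch of the corrected pair — double quotes so the PowerShell variable expands, and macro syntax in the YAML:

# UpdateBuildNumber.ps1
Write-Output "##vso[task.setvariable variable=updatedCounter;isOutput=true]$updatedCounter"

# azure-pipelines.yml
- template: ${{ variables.buildtemplate }}
  parameters:
    BuildVersion: ${{ variables.BuildVersion }}$(SetBuildNumber.updatedCounter)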
I have an Azure DevOps YAML build pipeline. The YAML file calls a PowerShell script, like so:

    - task: PowerShell@1
      name: SetBuildNumber
      displayName: 'Set Build Number'
      inputs:
        scriptName: '$(Build.SourcesDirectory)\BuildScripts\UpdateBuildNumber.ps1'

This PowerShell script is supposed to output the value of a variable. Here's what I have there:

    Write-Output '##vso[task.setvariable variable=updatedCounter;isOutput=true]$(updatedCounter)'

$updatedCounter is a variable in that script that gets set to a number. I want the YAML file that called it to then be able to use that number in a parameter sent to another file. This is what I have for that:

    - template: ${{ variables.buildtemplate }}
      parameters:
        BuildVersion: ${{ variables.BuildVersion }}$[SetBuildNumber.updatedCounter]

But what is getting passed to the template is the value of the BuildVersion variable concatenated with the string "$[SetBuildNumber.updatedCounter]". So it ends up like this, for example, where the value of variables.BuildVersion is "1.2.3.":

    "1.2.3.$[SetBuildNumber.updatedCounter]"

What am I doing wrong?
In YAML, how to get the value of an output variable from a PowerShell script and then pass it to another template
Answer for enriching the data with a geo-based policy: https://discuss.elastic.co/t/geo-polygon-query-inside-ingest-pipeline/274886/9

Also note that there is a feature suggestion pending on the Elastic GitHub to exclude the geo-shape used for the enrichment, so there will be no need for a second pipeline to delete the field.

answered Jun 12, 2021 at 7:49 by Roy Levy
I want to use the Elasticsearch geo-polygon query (https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-geo-polygon-query.html) or geo-shape query inside an ingest pipeline, and I've been thinking about how to do it, since there is no such thing as a "query" processor and the geo-polygon example is written as a query. The reason I'm trying to use this is to enrich documents whose country location is null, by checking their lat and lon fields and determining which country the document is from. Is there any way this could be done in an ingest pipeline? Or any other way to determine a position by lat and lon within geo shapes in a pipeline?
Is there a way to use 'Geo-polygon' query inside elasticsearch ingest pipeline?
Turns out we cannot use MGET with the pipeline; below is my final solution:

    from rediscluster import RedisCluster

    def redis_multi_get(rc: RedisCluster, keys: list):
        # Issue one GET per key inside a single pipeline instead of MGET
        pipe = rc.pipeline()
        [pipe.get(k) for k in keys]
        return pipe.execute()

    if __name__ == '__main__':
        rc = RedisCluster(startup_nodes=[{"host": host, "port": port}],
                          decode_responses=True,
                          skip_full_coverage_check=True)
        keys = rc.keys(PREFIX + '*')
        cache_hit = redis_multi_get(rc, keys)

answered Jun 29, 2021 at 3:34 by Abhishek Patil
I am trying to perform an MGET operation on my Redis with a pipeline to increase performance. I have tried doing MGET in one go as well as in batches:

    from rediscluster import RedisCluster

    ru = RedisCluster(startup_nodes=[{"host": "somecache.aws.com", "port": "7845"}],
                      decode_responses=True, skip_full_coverage_check=True)
    pipe = ru.pipeline()
    # pipe.mget(keys)
    for i in range(0, len(keys), batch_size):
        temp_list = keys[i:i + batch_size]
        pipe.mget(temp_list)
    resp = pipe.execute()

So far I am getting the error:

    raise RedisClusterException("ERROR: Calling pipelined function {0} is blocked when running redis in cluster mode...".format(func.__name__))
    rediscluster.exceptions.RedisClusterException: ERROR: Calling pipelined function mget is blocked when running redis in cluster mode...

What I want to know is:

- Does RedisCluster support pipelined MGET?
- If not, is there any other library that I can use to achieve this?
RedisCluster MGET with pipeline
Make sure you did not put the files into the wrong path for the Reports folder. You can check the value of the $(System.DefaultWorkingDirectory) variable under the pipeline, or print it out in the pipeline. With all the steps, you should make sure the files are in the right directory.

Besides, you could set the variable system.debug: true to check the detailed log of your task.

answered Apr 27, 2021 at 9:52 by Mr Qian
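A quick way to rule this out is to list what is actually on the agent before running azcopy; for example, a Bash task (or inline script) in the same release stage could run something along these lines. The Report path mirrors the one used in the question.

    echo "Working directory: $(System.DefaultWorkingDirectory)"
    # Show what actually exists under the Report folder
    ls -R "$(System.DefaultWorkingDirectory)/Report"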
I'm trying to copy a JMeter report to an Azure Blob container. I'm using an Ubuntu agent in a release pipeline; since the Azure blob file copy task is only supported on Windows agents, I've tried to find some other way to do the copy on a Linux agent.

I'm running the following command in an Azure CLI task:

    azcopy copy '$(System.DefaultWorkingDirectory)/Report' 'https://account.blob.core.windows.net/container?sp=racwdl&st=2021-04-22T12:12:49Z&se=2022-06-01T20:12:49Z&spr=https&sv=2020-02-10&sr=c&sig=xxxx' --recursive=true

I want to copy all the content of the 'Report' folder to the blob container recursively. The task finishes successfully, but no files at all are copied. I'm using a SAS for the container. Attached are the task logs of the run. Any ideas?
Copying a folder from a release pipeline to Azure Blob ends up with no content copied, but the task completes successfully
    something:
      name: "My component"
      version: ${VERSION_VARIABLE}

This did the trick!

answered Apr 22, 2021 at 15:30 by Eli Halych
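If the tool consuming the file does not expand environment variables by itself, one common way to render such a placeholder inside a GitLab job is envsubst. This is a sketch only: the file names are hypothetical, the job assumes the image provides the gettext tools, and VERSION_VARIABLE is assumed to be defined as a CI/CD variable.

    deploy:
      stage: deploy
      variables:
        VERSION_VARIABLE: "1.0"
      script:
        # Render the template: replace ${VERSION_VARIABLE} with its value
        - envsubst < component.template.yaml > component.yaml
        # Sanity check before deploying
        - cat component.yaml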
I have a GitLab repository. It has a pipeline with multiple stages. The stages deploy everything I need from simple YAML files. I want to be able to use variables in those YAML files that would be the same across all stages.

For example, if I have:

    something:
      name: "My component"
      version: 1.0

I want to use a variable instead of 1.0 across all stages. For example:

    something:
      name: "My component"
      version: VERSION_VARIABLE

I heard that simple YAML files don't have variables the usual way, so what workaround could I use to keep the version as one value across all deployments? Maybe the version value will be stored in pipeline variables or the repository's environment variables, but the question is how to use them in the simple YAML files from which the different components get deployed.
Run GitLab CI/CD pipeline with global variables in YAML files
You can try the following rules in your pipeline:

    stages:
      - build
      - deploy

    build:
      stage: build
      script:
        - echo "run build"
      rules:
        - if: '$CI_COMMIT_BRANCH == "develop"'
        - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "release"'

    dev_deploy:
      stage: deploy
      script:
        - echo "deploy dev"
      environment: DEV
      rules:
        - if: '$CI_COMMIT_BRANCH == "develop"'

    qa_release:
      stage: deploy
      script:
        - echo "deploy release"
      environment: QA
      rules:
        - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "release"'

- the build job only runs when commits are made to develop and for merge requests into the release branch
- the dev_deploy job only runs for commits on develop
- the qa_release job only runs for merge requests into the release branch

answered Apr 21, 2021 at 8:55 by danielnelz
I have 3 stages: Build, DEV_DEPLOY, and QA_DEPLOY. I want the build to run for both DEV_DEPLOY and QA_DEPLOY. (This was explained in a screenshot.) For me, when I merge to QA, only QA_DEPLOY runs; the build does not run.

Requirement: when a developer pushes code, Build and Deploy to the development stage should run; when a team lead merges the develop branch into the qa branch, Build and Deploy to QA should run again.

    stages:
      - build_proj
      - dev_deploy
      - qa_release

    build:
      stage: build_proj
      script:
        - run build

    dev_deploy:
      stage: dev_develop
      environment: DEV
      only:
        - develop

    qa_release:
      stage: qa_release
      dependencies:
        - build_proj
      environment: QA
      rules:
        - if: $CI_PIPELINE_SOURCE == "merge_request_event"
        - if: $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME == "develop"
Gitlab - Always run build for every stage
We recommend using the condition:

    condition: eq(variables['Build.SourceBranch'], 'refs/tags/test')

This means that if you want the test job to run, you need to push something to release with a test tag. We cannot use the value "not(startsWith(variables['Build.SourceBranch'], 'refs/tags/hotfix'))": in my test, if I push the commit to release with a hotfix tag, the test job is skipped.

Update: we can use the Custom condition under Additional options and set it to:

    eq(variables['Build.SourceBranch'], 'refs/tags/test')

For more details, you can refer to the Expressions documentation.

answered Apr 20, 2021 at 7:06 by Felix (edited Apr 28, 2021 at 7:48)
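For completeness, the usual way to express "run only when the push is not a hotfix tag, and everything before succeeded" in such a custom condition is roughly the expression below. This is a sketch: whether Build.SourceBranch actually reflects the tag depends on how the pipeline run was triggered (a tag-triggered run versus a branch push).

    and(succeeded(), not(startsWith(variables['Build.SourceBranch'], 'refs/tags/hotfix')))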
My Azure DevOps pipeline has 3 jobs. One builds the project for production, another builds it for testing, and the last publishes artifacts. When I push to the release branch, it triggers all 3 jobs, but they can take 10-15 minutes to finish. What I'm trying to achieve is to exclude the testing job if a certain tag is present on the commit, or something like that.

For example: don't trigger the test job if the branch tag contains "hotfix". I tried "Run this job with conditions" in the job's settings with the value "not(startsWith(variables['Build.SourceBranch'], 'refs/tags/hotfix'))", but if I push something to release with a hotfix tag it still runs.

Thanks
Azure Devops exclude job if branch tag is present
I can see the path mixes slashes: \ and / differ.

    /builds/oe/apps/cce/cce-support-apps/dell-ccx-ui-automation\HtmlResults\14_04_2021_10_46_42.html

It should look something like this:

    /builds/oe/apps/cce/cce-support-apps/dell-ccx-ui-automation/HtmlResults/14_04_2021_10_46_42.html

answered Aug 10, 2021 at 18:07 by Rathna Prashanth
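One way to avoid hard-coded Windows separators when the tests also run on a Linux GitLab runner is to build the report path with java.nio. This is a sketch only: the HtmlResults folder and the timestamp pattern are guessed from the path above, and the base directory here is simply the working directory.

    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.time.LocalDateTime;
    import java.time.format.DateTimeFormatter;

    public class ReportPath {
        public static void main(String[] args) {
            String timestamp = LocalDateTime.now()
                    .format(DateTimeFormatter.ofPattern("dd_MM_yyyy_HH_mm_ss"));
            // Paths.get joins segments with the separator of the current OS
            Path report = Paths.get(System.getProperty("user.dir"),
                    "HtmlResults", timestamp + ".html");
            System.out.println(report);
        }
    }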
I am generating the Extent report and have written code to mail that report to the respective person. Locally it works fine, but I have to run the same through a CI/CD pipeline in GitLab, and I'm getting the error below:

    email exception ******** javax.mail.MessagingException: IOException while sending message;
    nested exception is:
        java.io.FileNotFoundException: /builds/oe/apps/cce/cce-support-apps/dell-ccx-ui-automation\HtmlResults\14_04_2021_10_46_42.html (No such file or directory)

It seems the Extent report is not being generated. Please let me know how to generate the Extent report through a CI/CD pipeline in GitLab.
Extent report not generated when running Selenium tests through a GitLab CI/CD pipeline
In the official docs for Iterator.get_next(), you'll see that an OutOfRangeError is raised when the end of the sequence is reached:

    Raises tf.errors.OutOfRangeError: If the end of the iterator has been reached.

So the error is not caused by TextLineDataset or dataset.filter(). You may use dataset.as_numpy_iterator() like:

    out = list(dataset.as_numpy_iterator())

Or surround the iterator.get_next() call with a try/except block:

    for i in range(seq_length):
        try:
            element = iterator.get_next()
        except tf.errors.OutOfRangeError:
            print("End of sequence reached")
            break

answered Apr 12, 2021 at 2:21 by Shubham Panchal

Comment from Mehmet Fatih AKCA (Apr 12, 2021 at 8:18): Thank you for your answer. But I want to filter by length of text between 3 and 15. How do I do this?
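To address the follow-up comment about filtering by text length: tf.size counts tensor elements (which is 1 for a scalar string line), so a length filter would normally use tf.strings.length instead. A sketch of what that could look like, assuming "length" means the number of characters per line rather than the number of words:

    import tensorflow as tf

    dataset = tf.data.TextLineDataset("data.txt")

    def keep_line(line):
        # Number of UTF-8 characters in the line
        n = tf.strings.length(line, unit="UTF8_CHAR")
        return tf.logical_and(n >= 3, n <= 15)

    # Keep only lines whose length is between 3 and 15 characters
    dataset = dataset.filter(keep_line)

    for line in dataset:
        print(line.numpy())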
I want to filter as 3 <= length_of_text <= 15, but I couldn't manage it.

    import tensorflow as tf

    dataset = tf.data.TextLineDataset("data.txt")

    def drop_outliers(line):
        return (3 <= tf.size(line) <= 15).numpy()

    dataset = dataset.filter(lambda line: tf.py_function(func=drop_outliers, inp=[line], Tout=tf.bool))

    iterator = iter(dataset)
    print(iterator.get_next())

I got an "end of sequence" error when running this code.
How to filter Tensorflow TextLineDataset by length of text
    echo $?
    if [ $? eq 1 ]; then

The second $? would be the result of the previous echo command: most likely 0, considering the echo would have succeeded. You would need to store the value of the first $? first:

    cd /u/${DIR}
    res=\$?
    if [ \${res} eq 1 ]; then
      exit 1
    fi

answered Apr 7, 2021 at 6:26 by VonC

Comment from Hassan Iqbal Khan (Apr 8, 2021 at 10:58): I have tried your answer but the issue still persists. cd /u/test- gives "-bash: cd: /u/test-: No such file or directory" and res=0. Since the directory is not available, the result should contain 1 as the exit code of the command, because the command has failed, but it is returning 0, so I am unable to fail the build.

Comment from VonC (Apr 8, 2021 at 12:35): @HassanIqbalKhan Can you add set -x before the cd? That way, we can see exactly what is executed and its result.
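An alternative that sidesteps quoting $? inside the heredoc entirely is to let Jenkins capture the exit code of the whole ssh call. The following is a sketch of one stage: server_address and the remote directory are placeholders taken from the question, and returnStatus is a standard option of the sh step that returns the exit code instead of failing the build immediately.

    stage('stage') {
        steps {
            script {
                // Capture the exit code of the whole remote command
                def rc = sh(
                    script: "ssh -tt server_address 'cd /u/${DIR} && ls -al'",
                    returnStatus: true
                )
                if (rc != 0) {
                    error("Remote command failed with exit code ${rc}")
                }
            }
        }
    }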
I am trying to build a pipeline using a Jenkinsfile to perform some actions on a remote server, and I want to get the exit code of a failed command before moving on to other commands. I have tried the following, but it always returns 0 instead of 1, even if the command fails to execute or there is an exception during execution.

    pipeline {
        agent any
        stages {
            stage('stage') {
                when {
                    expression { env.GIT_BRANCH == 'origin/stage' }
                }
                steps {
                    echo 'build '
                    ansiColor('xterm') {
                        echo 'build '
                        sh """ssh -tt server_address << EOF
    cd /u/${DIR}
    echo \$?
    if [ \$? eq 1 ]; then
    exit
    fi
    ls -al
    exit
    EOF"""
                    }
                }
            }
        }
    }
How to get the exit code of the failed command from remote ssh using jenkinsfile and parameterized build
I just had to remove the labeling and a few brackets, as make_pipeline does that by itself, and then I can use index 3:

    my_pipeline[3]

answered Mar 16, 2021 at 18:41 by akfin
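If named steps are preferred over positional indexing, an alternative sketch is to use imblearn's Pipeline class directly so the step names are explicit. The column names and estimator settings below mirror the question but are otherwise placeholders; feature_importances_ is only available after the pipeline has been fitted.

    from imblearn.pipeline import Pipeline
    from imblearn.over_sampling import SMOTE
    from sklearn.compose import make_column_transformer
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier

    my_pipeline = Pipeline([
        ("preprocess", make_column_transformer(
            (MinMaxScaler(), ["column_a", "column_b"]),
            remainder="passthrough")),
        ("pca", PCA()),
        ("smote", SMOTE()),
        ("classifier", RandomForestClassifier()),
    ])

    # After my_pipeline.fit(X_train, y_train):
    # importances = my_pipeline.named_steps["classifier"].feature_importances_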
I am using the make_pipeline function from imblearn to do some data preparation and modelling. Now I want to use the feature_importances_ attribute of my model, but since my model is part of my pipeline I can't use it directly. So I want to refer to the model inside my pipeline. For this, I modified my pipeline code a bit to give my pipeline steps specific names, but this doesn't work.

My code:

    my_pipeline = make_pipeline([(
        make_column_transformer(
            (make_pipeline(
                MinMaxScaler()
            ), ['column_a', 'column_b']),
            remainder="passthrough")),
        (PCA()),
        (SMOTE()),
        ("classifier", RandomForestClassifier())])
How to refer to pipeline steps, to use feature_importance_ on a Pipeline?
You can add existing resources to an existing CloudFormation stack: AWS Console > CloudFormation > Open your stack > Actions > Import existing resources.

https://aws.amazon.com/de/blogs/aws/new-import-existing-resources-into-a-cloudformation-stack/

Then compare your local CloudFormation template with the one in CloudFormation.

answered Mar 2, 2021 at 5:57 by Matt

Comment from Saurabh Khandelwal (Mar 4, 2021 at 5:09): Thanks for that, but how do I integrate this with the pipeline? For this I need to specify the Code property in the template file, which will be static, but I want my CloudFormation template to be generated from the pipeline with the samTemplate.yaml above.
I am setting up a pipeline that uses a CloudFormation stack by creating a change set and executing it. But the first time this creates another Lambda, and there is no way to update or deploy the existing Lambda that was already created.

buildspec.yml

    version: 0.1
    phases:
      install:
        commands:
          - echo "nothing to do in install phase"
      pre_build:
        commands:
          - mvn clean install
      build:
        commands:
          - aws cloudformation package --template-file samTemplate.yaml --s3-bucket saurabh-lambda-pipeline --output-template-file outputSamTemplate.yaml
    artifacts:
      type: zip
      files:
        - samTemplate.yaml
        - outputSamTemplate.yaml

samTemplate.yaml

    AWSTemplateFormatVersion: '2010-09-09'
    Transform: 'AWS::Serverless-2016-10-31'
    Description: CD Lambda
    Resources:
      testLambda:
        Type: AWS::Serverless::Function
        Properties:
          FunctionName: testLambda
          Handler: com.test.handler.calculator::handleRequest
          Runtime: java8
          CodeUri: target/emi-calculator.jar
          AutoPublishAlias: prod
          Description: 'Lambda function for CD'
          MemorySize: 128
          Timeout: 30
          Events:
            getAZsAPI:
              Type: Api
              Properties:
                Path: /calculator
                Method: post
          Environment:
            Variables:
              calculatorType: 30
How can I update my existing Lambda (not created by CloudFormation) through CloudFormation, while setting up the pipeline through CloudFormation?
No. This cannot be solved by redownloading.

answered May 14, 2021 at 2:35 by Sean Zhang
I'm new to digdag. Below is an example workflow to illustrate my question:

    _export:
      sh:
        shell: ["powershell.exe"]

    _parallel: false

    +step1:
      sh>: py "C:/a.py"

    +step2:
      sh>: py "C:/b.py"

The second task runs right after the first task starts. However, I want the second task to wait for the first task to complete successfully. I modified the first task, a.py, to just raise a ValueError, but the second task still runs right after the first task starts. This is not consistent with my understanding of the digdag documentation, but I don't know what is wrong with my workflow. Could someone please advise?
digdag shell script tasks complete instantaneously
In OneVsRestClassifier, the attribute estimator is the unfitted estimator. There are several fitted estimators, one for each class, that live in the estimators_ attribute. So

    [est.feature_importances_ for est in etc_final.steps[1][1].estimators_]

will contain the information you want.

answered Mar 1, 2021 at 2:20 by Ben Reiniger

Comment from Shalin (Mar 2, 2021 at 3:41): Thanks @Ben. Somehow that isn't working for my code. However, I just fit the model with etc_final[1].estimator.fit(X_train, y_train) and then applied feature_importance = etc_final[1].estimator.feature_importances_. That took care of it.
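Since the one-vs-rest wrapper holds one fitted tree ensemble per class, a common follow-up is to aggregate the per-class importances, for example by averaging them. A sketch, assuming the fitted pipeline from the question is called etc_final and its second step is the OneVsRestClassifier:

    import numpy as np

    ovr = etc_final.steps[1][1]  # the fitted OneVsRestClassifier
    per_class = np.array([est.feature_importances_ for est in ovr.estimators_])

    # One importance vector per class; average them to get a single ranking
    mean_importances = per_class.mean(axis=0)
    top = np.argsort(mean_importances)[::-1][:10]
    print(top, mean_importances[top])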
I have fitted an ExtraTreesClassifier model using a Pipeline:

    etc_final.fit(X_train, y_train)

Here are the model parameters:

    Pipeline(steps=[('scale', StandardScaler(with_mean=False)),
                    ('clf', OneVsRestClassifier(estimator=ExtraTreesClassifier(criterion='entropy',
                                                                               min_samples_leaf=5,
                                                                               min_samples_split=5,
                                                                               n_estimators=200)))])

Next, I try to pull feature_importances_:

    etc_final.steps[1][1].estimator.feature_importances_

and get this error:

    NotFittedError: This ExtraTreesClassifier instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator.

Can someone help point out what I am doing incorrectly? How can I pull feature_importances_ when I am using a pipeline?
ExtraTreesClassifier : When trying to pull feature importances, getting a 'NotFittedError' when the model is already fitted
Finally found something that works:

    $path = "G:\Downloads\Chrome\test2"
    $items = (Get-ChildItem -Path $path -File *.mp4 |
        Where-Object { $_ -Match '^Website\.com_+' } |
        ForEach-Object {
            $_ | Rename-Item -NewName { $_.Name -replace 'Website\.com_','' -replace '\+',' ' } -PassThru
        })
    $items | % { Rename-Item -Path $_.FullName -NewName (Get-Culture).TextInfo.ToTitleCase($_.Name) }

Thanks to everyone who tried to help me!

answered Feb 23, 2021 at 20:49 by Francois Belanger
I'm trying to rename a bunch of files using PowerShell by removing the prefix (which is the same in all files), replacing "+" with a space, and setting the remainder to title case. Here's what I have so far:

    Where-Object { $_ -Match '^Website\.com_+' } |
    ForEach-Object {
        $_ | Rename-Item -NewName { $_.Name -replace 'Website\.com_','' -replace '\+',' ' };
        Rename-Item $_.Fullname (Get-Culture).TextInfo.ToTitleCase($_)
    }

The first rename works: it removes the prefix and formats the file names properly. But then the second rename says the items don't exist, which makes me think I should pass them into another foreach loop in another pipe, but I can't seem to make that work either. It seems like having two Rename-Item calls isn't really working, and I tried doing the title case together with the replace, but that doesn't seem to work either.
Rename-Items twice in the same pipeline using powershell