Columns: Response (string, 8–2k chars), Instruction (string, 18–2k chars), Prompt (string, 14–160 chars).
You can use post actions for this, as shown below, and call the Slack API to send messages without installing the Slack plugin. See https://api.slack.com/messaging/sending.

    post {
        success {
            script {
                // notify Slack
            }
        }
        failure {
            script {
                // notify Slack
            }
        }
    }
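A fuller sketch of this idea, under stated assumptions: the `SLACK_TOKEN` environment variable/credential and the `#builds` channel are placeholders that do not come from the answer, and the helper simply calls the Slack Web API method `chat.postMessage` with curl instead of the Slack plugin.

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { echo 'build...' }
        }
    }
    post {
        success { script { notifySlack('SUCCESS') } }
        failure { script { notifySlack('FAILURE') } }
        aborted { script { notifySlack('ABORTED') } }
    }
}

// Helper that posts the build status to Slack via the Web API.
def notifySlack(String status) {
    sh """curl -s -X POST https://slack.com/api/chat.postMessage \
        -H 'Authorization: Bearer ${env.SLACK_TOKEN}' \
        -H 'Content-Type: application/json' \
        -d '{"channel": "#builds", "text": "${env.JOB_NAME} #${env.BUILD_NUMBER}: ${status}"}'"""
}
```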
In the pipeline I work on, the four successive stages are: building a code package, building a binary package, building an Android application, and building an iOS application. Then a JSON request is sent to the server containing information about the build, including its name, its status (success / aborted / failure), and any warnings. I don't know how to find out whether the build was aborted or failed. I need to post the build information to Slack without using the Jenkins plugin. I'd appreciate some tips on how to do this, because all my attempts have been unsuccessful.
Preparing Slack message in pipeline after build stages
I do not see how you would load the pipelines while starting up ES. You can either do it via the API after the cluster has started, or load them with filebeat itself. For most of the pipelines we use, since they do not change very often after the initial setup, we decided to use a very simple bash script that iterates through a folder of pipeline JSONs and posts them to the API via cURL commands:

    curl -H "Content-Type: application/json" -XPUT http://${ELASTIC_URL}:9200/_ingest/pipeline/some-pipeline -d @some-pipeline.json

For other apps, though, we had to create custom filebeat modules which have the pipelines built in. To load pipelines via filebeat, you need to create a custom module which already contains the pipeline JSON; see the module development guide for more details. Once created, you can run ./filebeat setup --pipelines --modules my-custom-module to push the pipelines to Elastic.
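A hedged sketch of the kind of loop the answer describes (the ./pipelines folder layout is an assumption, and ELASTIC_URL is expected to already be set): every `<name>.json` file is PUT to the ingest pipeline API under the same name.

```bash
#!/usr/bin/env bash
# Push every pipeline definition in ./pipelines to Elasticsearch's ingest API.
for f in ./pipelines/*.json; do
  name=$(basename "$f" .json)
  curl -s -H "Content-Type: application/json" \
       -XPUT "http://${ELASTIC_URL}:9200/_ingest/pipeline/${name}" \
       --data-binary "@${f}"
done
```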
I have written pipeline files for Logstash, but my current client is opposed to using Logstash and wants to ingest Filebeat-generated logs directly into Elasticsearch. Fine, if that is really what he wants, but I cannot find an equivalent pipeline file for Elasticsearch. I want to COPY config files into an image with a Dockerfile, then build the stack with Compose, making a nice deployment pattern for the client going forward. I am using version 7.11 of the stack, and I have a good start on the Compose file for Elasticsearch and Kibana and another Compose file for Filebeat. What I cannot find is a syntax that allows placing the pipelines into the ES image. Can someone point me in the right direction? Thanks!
How to configure Elasticsearch ingest pipelines using a Dockerfile and/or Docker Compose?
It's well into the 21st century. Unicode is 30 years old. Use Unicode.

Comment from the asker: I converted all the .java files to UTF-8 with a script. That solved the "unmappable character for encoding UTF-8" problems; I had no other alternative.
I have a Maven project consisting of two modules. The project compiles correctly locally. Now I've ported the project to GitLab, but I can't get it to compile. There are a number of errors like:

    exampleclass.java: Balcone[115,42] unmappable character for encoding UTF-8

In the maven-compiler-plugin section of the POM, UTF-8 encoding has been specified, along with JDK 1.7. For development I use Eclipse with Cp1252 encoding (the default). What is the best practice for Java projects? Do you always have to set UTF-8 in Eclipse? How can I get it to compile on GitLab? Thank you.
Build Java Project with UTF-8 on Gitlab
You don't declare the stages order, so the GitLab pipeline doesn't know what order is expected. At the beginning of your .gitlab-ci.yml file, add something like this (or whatever order you want):

    stages:
      - deploy
      - test
      - build
    # rest of your file...

Alternatively, you can use needs to declare relations between jobs, as sketched below.
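An illustrative sketch of the `needs` alternative mentioned in the answer; the job names here are made up, not from the question.

```yaml
build_job:
  stage: build
  script:
    - echo "build"

test_job:
  stage: test
  needs: ["build_job"]   # start as soon as build_job finishes, independent of stage order
  script:
    - echo "test"
```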
I have the problem that I want to trigger another pipeline (B) in another project (B) only when the deploy job in pipeline (A) has finished. But my configuration starts the second pipeline as soon as the deploy job in pipeline (A) starts. How can I make the second pipeline trigger only when the deploy job in pipeline (A) in project (A) has finished? Here is my gitlab-ci.yml:

    workflow:
      rules:
        - if: '$CI_COMMIT_BRANCH'

    before_script:
      - gem install bundler
      - bundle install

    pages:
      stage: deploy
      script:
        - bundle exec jekyll build -d public
      artifacts:
        paths:
          - public
      rules:
        - if: '$CI_COMMIT_BRANCH == "master"'

    staging:
      variables:
        ENVIRONMENT: staging
      stage: build
      trigger: example/example

    test:
      stage: test
      script:
        - bundle exec jekyll build -d test
      artifacts:
        paths:
          - test
      rules:
        - if: '$CI_COMMIT_BRANCH != "master"'
How to trigger pipelines in GitLab CI
Your workflow:rules do not have an explicit allow for $CI_PIPELINE_SOURCE == "schedule". This is what I use for merge request pipelines:

    workflow:
      rules:
        # Do not start pipeline for WIP/Draft commits
        - if: $CI_COMMIT_TITLE =~ /^(WIP|Draft)/i
          when: never
        # MergeRequest-Pipelines workflow
        # For merge requests create a pipeline.
        - if: $CI_MERGE_REQUEST_IID || $CI_PIPELINE_SOURCE == "merge_request_event"
        # For tags, create a pipeline.
        - if: $CI_COMMIT_TAG
        # For default branch create a pipeline (this includes on schedules, pushes, merges, etc.).
        - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
        # For other pipeline triggers
        - if: $CI_PIPELINE_SOURCE =~ /^trigger|pipeline|web|api$/
We would like to have a .gitlab-ci.yml which supports the default CI pipeline plus a SAST pipeline that only runs on a daily schedule: lint, build and test-unit on merge requests, and test-sast scheduled once a day. What seems logical but didn't work is this configuration:

    include:
      - template: Security/SAST.gitlab-ci.yml
      - template: Workflows/MergeRequest-Pipelines.gitlab-ci.yml

    image: node:lts-alpine

    stages:
      - lint
      - build
      - test

    lint:
      stage: lint
      script:
        - npm i
        - npm run lint

    build:
      stage: build
      script:
        - npm i
        - npm run build

    test-unit:
      stage: test
      script:
        - npm i
        - npm run test:unit

    test-sast:
      stage: test
      script: [ "true" ]
      rules:
        - if: $CI_PIPELINE_SOURCE == "schedule"
          when: always
        - when: never

We then did some tests using the environment variable SAST_DISABLED, which didn't work either. Maybe someone has a similar setup and can help out with a working sample?
GitLab pipeline (.gitlab-ci.yml) for CI and scheduled SAST
The failure of the test methods for your project is not 'standard error'. It is just the result report of the test methods, and it is not written to the StandardError stream, so the failures do not affect the final status of the task run. If you want the job to fail when the tests fail, as a workaround you can set up a script to check the test results and write a standard error (e.g. exit 1) to the StandardError stream if any test fails.
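A hedged sketch of the workaround the answer describes: a script step placed after the Maven task that scans the surefire reports for failures or errors and exits 1 so the job is marked as failed. The report path pattern and the regex check are assumptions, not an official task.

```yaml
- powershell: |
    $reports = Get-ChildItem -Recurse -Filter 'TEST-*.xml' -Path '$(System.DefaultWorkingDirectory)'
    # A report with failures="N" or errors="N" (N > 0) means at least one test did not pass.
    $failed = $reports | Where-Object {
      Select-String -Path $_.FullName -Pattern 'failures="[1-9]', 'errors="[1-9]' -Quiet
    }
    if ($failed) {
      Write-Error "Test failures detected in surefire reports"
      exit 1
    }
  displayName: 'Fail the job if any tests failed'
```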
I'm getting "Build Success" in the log of Maven task even though the goal task is failure. How to make the job a failure if the task has errors as below? Thanks.Yaml file:- task: Maven@3 inputs: mavenPomFile: 'pom.xml' mavenOptions: '-Xmx3072m' javaHomeOption: 'JDKVersion' jdkVersionOption: '1.8' jdkArchitectureOption: 'x64' publishJUnitResults: true testResultsFiles: '**/surefire-reports/TEST-*.xml' goals: 'integration-test -DskipIntegrationTests=false -Dmaven.test.failure.ignore=false'Error as below:[Error] Failures: [Error] <Filename> errror details [INFO] [ERROR] Tests run:2, Failures:2, Errors:0, Skipped:0 [INFO] [INFO]---------------------------------------------- [INFO] BUILD SUCCESS [INFO]----------------------------------------------
Azure DevOps pipeline job shows success even though the Maven task failed
To get the current branch name in your pipeline script you can use the code below.

    echo "My branch is: ${scm.branches[0]}"
    node('build_node') {
    }
For a project I would like to know which branch is used to check out the pipeline script. In this case I would expect to get back that the branch is */develop. Solved by using this:

    String url = 'curl --user ' + "${JENKINS_USER}:${JENKINS_PASS}" + ' https://jenkinsurl/job/' + env.JOB_NAME + '/config.xml'
    String CICD_BRANCH = new XmlSlurper().parseText(url.execute().text).definition.scm.branches."hudson.plugins.git.BranchSpec".getProperty("name").toString().substring(2)
    env.CICDBRANCH = CICD_BRANCH.toString()
Access Jenkins pipeline definition within Jenkinsfile
According to the official document Supported source repositories, Azure DevOps Server 2019 does not support GitHub as a source repository type, even in the classic editor. So there is no GitHub YAML option on Azure DevOps Server 2019; it is currently only supported by Azure DevOps Services.

Follow-up from the comments: this was also verified on Azure DevOps Server 2020, and it is still not supported there.
I have Azure DevOps Server 2019 Update 1.1. I understand from a couple of videos that when we try to create a new pipeline, we should see several options, whereas on my server there is no "GitHub YAML" option, only Enterprise or other Git. Am I missing a config or an update? Thanks.
Azure DevOps Server 2019 - Create pipeline - No GitHub yaml option in Where is Your Code
There is no limitation in Redis. It was because of my input setting in Logstash:

    input {
      jdbc {
        ...
        jdbc_page_size => 10000
        jdbc_paging_enabled => true
      }
    }
I'm using Logstash to send data to Redis as:

    output {
      redis {
        host => ["${REDIS_URL}"]
        data_type => "list"
        key => "ID"
        codec => line { format => "%{id}" }
      }
    }

When I check the data in Redis, it has created an ID list with 10000 records in DB0, but the real data is much more than 10000 records. Is 10000 a limit in Redis? How do I handle the rest of the data in a Redis list? If 10000 is a limit in Redis, can I use another db like DB1 to save the other data?
How to save large data sent from Logstash to Redis with the list type?
I solved this by making use of the $_ automatic variable in PowerShell.

    foreach ($line in Get-Content -Path .\Folders.txt) {
        $scrpt = 'myProgram getData --query "select key from datatable where key =''$line'')" --resultformat=csv | foreach {$line + "/" + $_} >> data.csv'
        Invoke-Expression $scrpt
    }
In PowerShell 5.1, my foreach loop appends data from myProgram to data.csv using the append syntax (>>). The code below successfully appends the result of my getData --query to the data.csv file. I want a small change to the generation of data.csv: I want the $line variable and "/" merged into the beginning of each line of data.csv. How can this be solved?

Generation code (data.csv):

    foreach ($line in Get-Content -Path .\Folders.txt) {
        $scrpt = 'myProgram getData --query "select key from datatable where key =''$line'')" --resultformat=csv >> data.csv'
        Invoke-Expression $scrpt
    }

Expected result (data.csv):

    $line/data
    $line/data
    $line/data

Actual result (data.csv):

    data
    data
    data
Powershell 5.1 prepending $line variable to the result before appending (>>) to .csv
I know this answer is kind of late, but here's a complete example to achieve what you asked for:

    const Hapi = require('@hapi/hapi')

    const server = Hapi.server({ port: 8000 })

    const success = function (data) {
      return this.response({ data })
    }

    server.decorate('toolkit', 'success', success)

    server.route({
      method: 'GET',
      path: '/{name}',
      handler: function (request, h) {
        return h.success(request.params.name)
      }
    })
I would like to "wrap" or manipulate every response from my REST API calls. I want to use a middleware/pipeline, but only in the response flow. Example:

    { data: everyResponseReturnVal }

I am using hapi.js as the server-side Node.js framework.
Hapi.js response middleware
To build another multibranch pipeline you do not need the ".." before its name. So in your case just use:

    job: "multibranchPipeleB/master"

From the comment thread: the name must match the "Full project name" shown in the Jenkins UI for the branch job, and the target branch job must have been built at least once. The asker's master branch had never been built, which is why Jenkins could not find the item; triggering a build on master from the multibranch pipeline created the master branch job and fixed it.
I have a Jenkins setup with two multibranch pipelines which depend on each other, say multibranchPipelineA and multibranchPipelineB. I would like a job from multibranchPipelineA to build a specific branch in multibranchPipelineB and wait for the build to finish. I have tried the below from multibranchPipelineA's Jenkinsfile:

    stage('Build MiniApp Libs') {
        steps {
            build(
                job: "../multibranchPipeleB/master",
                propagate: true,
                wait: true
            )
        }
    }

But I always receive: No item named ../multibranchPipeleB/master found. If I use a single pipeline, say pipelineB, then ../pipelineB works. How can I build a specific branch of a multibranch pipeline from another multibranch pipeline job, and wait for the build to finish?
Jenkins build multibranch pipeline from another multibranch pipeline
Move it to the level of the pipeline:

    def res = bat script: 'call compile.bat', returnStatus: true
    if (res != 0) {
        env.BuildResult = 'failure'
    }
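A hedged sketch of where that snippet would sit in the scripted pipeline from the question: the bat exit code is captured with returnStatus, and the environment variable is then read in a later stage.

```groovy
node('test') {
    env.BuildResult = 'SUCCESS'
    stage('Compile') {
        // returnStatus: true prevents the step from failing the build; we inspect the code ourselves
        def res = bat(script: 'call compile.bat', returnStatus: true)
        if (res != 0) {
            env.BuildResult = 'FAILURE'
        }
    }
    stage('Post') {
        bat 'echo %BuildResult%'
    }
}
```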
I'd like to set an env variable based on the exit code of a bat step. When compile.bat returns 1, how do I set env.BuildResult to FAILURE?

    node("test") {
        env.BuildResult = 'SUCCESS'
        stage('Compile') {
            bat '''
                call compile.bat
                if %ERRORLEVEL% NEQ 0 SET BuildResult='FAILURE'
            '''
        }
        stage('Post') {
            bat '''
                echo %BuildResult%
            '''
        }
    }
How to set environment variable by bat in Jenkins pipeline
    rules = {'ness': '', 'ational': 'ate', 'ing': '', 'sses': 'ss'}

    def stemx(inp: str):
        for x in rules:
            if inp[len(inp) - len(x):] == x:
                return inp[0:len(inp) - len(x)] + rules[x]
        return inp

    print(stemx('singfds'))
I have a list of words and a list of stem rules. I need to stem the words whose suffixes are in the stem rules list. I got a hint from a friend that I can use pipeline methods. For example, if I have:

    stem = ['less', 'ship', 'ing', 'les', 'ly', 'es', 's']
    text = ['friends', 'friendly', 'keeping', 'friendship']

I should get: 'friend', 'friend', 'keep', 'friend'.
I need to perform a stemming operation in Python, without NLTK, using pipeline methods
Passing _line to rl.on('pause', function(_line) {}) hides the global _line; that's why it's giving undefined, and your cmd command is fine. There is another way to do this, using process I/O in Node.js:

    function YourData(input) {
    }

    process.stdin.resume();
    process.stdin.setEncoding("ascii");

    _input = "";
    process.stdin.on("data", function (input) {
        _input += input;
    });
    process.stdin.on("end", function () {
        YourData(_input);
    });

Read more about readline and process I/O in the Node.js documentation.
I want to run name.js from the command prompt using Node.js, pass an input file, and redirect the output to output.txt. The command I am using is node name.js < input.txt | > output.txt, but this is not working, or I am doing it wrong. name.js looks like this:

    const readline = require('readline');
    const rl = readline.createInterface({
        input: process.stdin,
        output: process.stdout
    });

    var _line = "";
    rl.on('line', function(line){
        _line += line;
    });
    rl.on('pause', function(_line){
        console.log(_line);
    });

I have also tried this in PowerShell with -Command "command".

EDIT: for example, input.txt contains:

    hello js
    hello node
    hello world!

Now, if I run node name.js < input.txt > output.txt, I just get the return value of console.log(), "undefined", in output.txt.
node.js: take input from file using readline and process I/O in windows command prompt
For the error "OAuth token not found. Make sure to have 'Allow Scripts to Access OAuth Token' enabled in the build definition": in a classic pipeline, check the 'Allow scripts to access the OAuth token' option on the agent job (shown in a screenshot in the original answer).

For a YAML pipeline, map the token at the script/task level, for example for PowerShell:

    - powershell: ./build.ps1
      env:
        system_accesstoken: $(System.AccessToken)

Here is the ticket you can refer to.

Update, applied to the asker's GitCopyDiff task:

    - task: GitCopyDiff@1
      inputs:
        destination: '$(Build.ArtifactStagingDirectory)/diff'
        changeType: 'M'
      env:
        SYSTEM_ACCESSTOKEN: $(system.accesstoken)
My build fails with the error below. I want to enable this option in the pipeline. Could someone help me with how to do this? I can select this option in the additional options section of a release pipeline, but I'm not sure how to do it in a build pipeline.

    ##[error]OAuth token not found. Make sure to have 'Allow Scripts to Access OAuth Token' enabled in the build definition.
How to enable OAuth authentication in a build pipeline of Azure DevOps
The __init__ method of a Python class runs when you do pipe = ItemPipeline(). It isn't mandatory; it is only really needed when you want to run some code at that step, and you don't need one for your current use case of reusing ItemPipeline. For the rest of your function: you don't need to make ITEM an attribute of the pipeline, but you do need to change the method arguments to make spider optional. Here is the code you need:

    class ItemPipeline:
        def process_item(self, item, spider=None):
            # do a bunch of important stuff
            ...

        def open_spider(self, spider=None):
            # initialize db
            ...

        def close_spider(self, spider=None):
            # close db
            ...


    def functional_pipeline(argument1=1, argument2=False, argument3="Manual"):
        ITEM = {'argument1': argument1, 'argument2': argument2, 'argument3': argument3}
        pipe = ItemPipeline()
        pipe.open_spider()
        pipe.process_item(ITEM)
        pipe.close_spider()
In Scrapy, I have a pipeline that does a whole bunch of work for each item it scrapes. I'm looking to create a function that can be used to create an instance of the pipeline manually and pass in the information I need. I have zero experience with OOP and have never used classes before. All of the resources I've looked at include an __init__ method, which would help me understand, but I'm just lost. Below is a snippet of what I want to attempt, but I really don't want to break anything. It also seems redundant, but again, I don't know what I'm doing. Since I need to be able to run this from a few different places, I think this makes the most sense, but any input would be appreciated.

    class ItemPipeline:
        def process_item(self, item, spider):
            # do a bunch of important stuff
            ...

        def open_spider(self, spider):
            # initialize db
            ...

        def close_spider(self, spider):
            # close db
            ...


    def functional_pipeline(argument1=1, argument2=False, argument3="Manual"):
        ITEM = {argument1: argument1, argument2: argument2, argument3: argument3}
        pipe = ItemPipeline()
        pipe.item = ITEM
        pipe.open_spider()
        pipe.process_item(item=pipe.item)
        pipe.close_spider()
Scrapy - Create a function pipeline for manual imports in Python
Solved it in the end by using a Tomcat JNDI connection, which can be specified in Tomcat itself, so the problem goes away; an illustrative configuration is sketched below.
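A hedged illustration of the JNDI approach the answer mentions, not the answerer's actual configuration: the datasource is defined per server in Tomcat's conf/context.xml (all values below are placeholders), so the same .war can be deployed unchanged to TEST and PRODUCTION and look the connection up via JNDI (e.g. java:comp/env/jdbc/appDB).

```xml
<!-- conf/context.xml on each Tomcat server; differs per environment, the WAR does not. -->
<Context>
  <Resource name="jdbc/appDB"
            auth="Container"
            type="javax.sql.DataSource"
            driverClassName="org.postgresql.Driver"
            url="jdbc:postgresql://db-host:5432/appdb"
            username="app_user"
            password="change-me"
            maxTotal="20"
            maxIdle="5"/>
</Context>
```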
In .NET this is easy: we have a file transformation task. I am trying to do the same in an Azure DevOps release pipeline with Java, but because it's a WAR file I am not sure how to proceed. I need to set the DB connection for the TEST server and change it for the PRODUCTION server. I build the WAR file using Gradle in an Azure build pipeline (no problems here). I then create a release pipeline in Azure DevOps that picks up the WAR and copies it to tomcat/webapps to install it. My problem is that I have a settings file containing the DB connection, which changes depending on which server the WAR is going to. I want to use the Azure file transformation task, but the problem is that all I have is a .war file. How can I have one build and then change the DB connection/settings file depending on the server it's being deployed to in the Azure release pipeline?
How to change settings in a Java war file on Azure release pipeline for tomcat
According to your description, I tested this problem. Please check the following two cases. First, make sure your changes are saved and run; if your changes are not saved, the system will not recognize them. Second, if you change the trigger branch of pipeline B and click save but not run, the pipeline will still be triggered but the branch will not change; if you change pipeline B and click run, the branch of the triggered pipeline B works as expected. As a workaround, make sure the trigger branch of your pipeline B actually changes, and click save and run after the change. This should work as expected.

Comment from the asker: I did save and run my changes, but it never picks up the changes. This case is also cross-project, so maybe that isn't working properly yet.
I'm trying to trigger the build of pipeline B based on the build of pipeline A. This works, and I tested it on a feature branch before changing it to master. However, when I change the branch to master, it does NOT trigger on master but still triggers on the feature branch. I added the following to trigger pipeline B:

    resources:
      pipelines:
        - pipeline: test1
          source: test1
          branch: master
          project: project1
          trigger:
            branches:
              - master

    stages:
      - buildPackageMaster

Previously, for testing purposes, it was configured for a feature branch, as follows:

    resources:
      pipelines:
        - pipeline: test1
          source: test1
          branch: feature/DATA-24843
          project: project1
          trigger:
            branches:
              - feature/DATA-24843

    stages:
      - buildPackageNonMaster

But it doesn't pick up the change I made, so it still triggers on a change in feature/DATA-24843, but not from master. Any ideas on why this happens and how I can fix it?
Changing the AzureDevOps trigger doesn't work (from build to build pipeline)
You can establish CI/CD for API Management using Azure Resource Manager templates. After API developers have finished developing and testing an API and have generated the API templates, they can submit a pull request to merge the changes into the publisher repository. API publishers can validate the pull request and make sure the changes are safe and compliant; for example, they can check whether only HTTPS is allowed to communicate with the API. Most validations can be automated as a step in the CI/CD pipeline. Once the changes are approved and merged successfully, API publishers can choose to deploy them to the production instance either on a schedule or on demand. The deployment of the templates can be automated using GitHub Actions, Azure Pipelines, Azure PowerShell, the Azure CLI, or other tools (see the CLI sketch below). The Azure API Management DevOps Resource Kit is a great place to start.
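A hedged example of what the deployment step could look like with the Azure CLI; the resource group and file names are placeholders and do not come from the answer.

```bash
# Deploy the extracted APIM ARM templates from a pipeline step after the PR is merged.
az deployment group create \
  --resource-group my-apim-rg \
  --template-file apim/api.template.json \
  --parameters @apim/api.parameters.json
```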
In the Azure API (APIs -> Inbound processing -> Policies), some of our developers are changing these policies and we have no log or approval (PR) mechanism. We want a pull request or a similar mechanism for that area: if any developer changes policies, the change should be approved before going live. Can you give me some keywords or pointers? I don't know how to search for it.
approve mechanism azure apim policies
I found the answer simply by playing around with CI_PIPELINE_SOURCE in the downstream pipeline: if $CI_PIPELINE_SOURCE == "pipeline", don't execute the task.
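A hedged sketch of that rule in .gitlab-ci.yml; the job name, downstream project path and release-branch pattern are illustrative, not from the question. Skipping the trigger job when the pipeline was itself started by another pipeline breaks the A -> B -> A loop.

```yaml
trigger_other_project:
  stage: deploy
  trigger: group/project-b
  rules:
    # Do not re-trigger when this pipeline was started by the other project's pipeline.
    - if: '$CI_PIPELINE_SOURCE == "pipeline"'
      when: never
    # Otherwise trigger the downstream project on release branches.
    - if: '$CI_COMMIT_BRANCH =~ /^release-/'
```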
On GitLab CI, I have two projects that work together. Sometimes only project A is updated, sometimes only B, and sometimes both (when the releases are linked). I would like to create a pipeline on each project that runs that project and triggers the other only when necessary. I used downstream triggers, but I'm facing two situations that are in conflict: when I only push to A on a release-* branch, I want to trigger master on B, and vice versa; but when I push to both A and B because the release is linked to the two projects, pipeline A will trigger B, then B will trigger A, then A will trigger B, and so on forever (I didn't test this case, but in theory that is what will happen). Any ideas how to solve this? Using ChatOps with Slack is a solution that can be considered.
Multi-project pipelines that can launch each other
No, you cannot put SCAN into a pipeline, because the second SCAN depends on the result of the first, and you have to get the reply of the first SCAN before you can send the second SCAN command. In order to improve performance, you have two options: put the GET commands into a pipeline, or use the MGET command to get multiple keys.

Comment from the asker: yes, I already use a pipeline for GET; it is the SCAN that causes the performance problems.
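A hedged Python (redis-py) sketch of the two suggestions, since the question's snippet is pseudocode: SCAN stays outside the pipeline, while the per-key reads for each SCAN page are batched either with a pipeline of GETs or with a single MGET.

```python
import redis

r = redis.Redis()

cursor = 0
while True:
    cursor, keys = r.scan(cursor=cursor, match="bla:*", count=100)
    if keys:
        # Option 1: pipeline the GETs for this page.
        pipe = r.pipeline(transaction=False)
        for k in keys:
            pipe.get(k)
        values = pipe.execute()
        # Option 2 (equivalent here): values = r.mget(keys)
    if cursor == 0:  # SCAN is finished when the cursor wraps back to 0
        break
```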
I have a construct like this:

    while {
        keys, cursor, err = redis.scan(cursor, "bla:*", 100)
        for keys {
            res = redis.get(keys[i])
            ....
        }
        ....
    }

Is it possible to put the scan and get commands into a pipeline? If yes, how can I do it? I have some performance problems.
Redis pipeline with scan and get
There is no way of changing a column's data type without rewriting the whole table. You can run SQL like:

    CREATE TEMP FUNCTION myFunctionStringToFloat(x STRING) AS (
      -- Assuming you have non-trivial logic to safely convert STRING to FLOAT
      -- If you don't, you can just put SAFE_CAST(x AS FLOAT64)
    );

    CREATE OR REPLACE TABLE myTable AS
    SELECT * EXCEPT(col1), myFunctionStringToFloat(col1) AS col1
    FROM myTable;

You will be charged for scanning the table, though. The other way is to keep your CSV super clean and make sure the table load succeeds with the FLOAT column.
We are working on big data pipeline automation on GCP and are ingesting some CSV files. To prevent process breaks at the BQ level due to schema issues, we ingested the first table after converting all columns to the 'STRING' type. Is it possible in BQ to gracefully convert the schema of the table just ingested, so that we can change the STRING types to their actual types like INT64, FLOAT, etc.? Is this a good approach?
Schema conversion of a BQ table - Change of columns data type
I recommend you have a look at https://wiki.acumos.org/. I found a page with a video related to the MLWB pipeline: https://wiki.acumos.org/display/REL/Boreas+Demos. I also found https://wiki.acumos.org/display/REL/Clio+Demos?preview=/20546727/26640880/Workbench%20ACUMOS-3251-3465%20-%201001.mp4. Perhaps other demos or videos exist.

Comment from the asker: the issue persists; I thought the URL would be auto-generated/assigned during pipeline creation.
I have installed the Acumos Clio release. I am able to onboard a sample model and create acu-compose, but I am failing to create pipelines. I understand there are a few ways to create a pipeline: (1) Design Studio -> Workbench -> Pipeline -> create pipeline: this has no URL text box, and when created it throws "Server error"; the logs say "Malformed URL" (screenshot attached to the original post). (2) Via Home -> Design Studio -> ML Workbench -> Projects -> create data pipeline: this has a URL box, but I am not sure what URL value to input. Reference Jira: https://jira.acumos.org/browse/ACUMOS-4018
Acumos AI clio : Can not create pipeline
I think your idea of staging the data in S3, if acceptable in your specific use case, is a good baseline design: SageMaker connects smoothly to S3 (via a Batch Transform or Processing job), and Redshift COPY statements are best practice for efficient loading of data and can be run from S3 ("COPY loads large amounts of data much more efficiently than using INSERT statements, and stores the data more effectively as well.", per the Redshift documentation).
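A hedged example of the Redshift side of this S3 staging pattern; the schema/table name, bucket prefix and IAM role are placeholders, not from the answer.

```sql
-- Load the SageMaker prediction output that was written to S3 into a Redshift table.
COPY analytics.predictions
FROM 's3://my-bucket/sagemaker-output/predictions/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS CSV
IGNOREHEADER 1;
```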
I wanted to check with the community here whether anyone has explored a pipeline option from SageMaker to Redshift directly. I want to load the predicted data from SageMaker into a table in Redshift. I was planning to do it via S3, but was wondering if there are better ways to do this.
Datapipeline from Sagemaker to Redshift
I'm a SubGit tool support engineer, and I would be glad to help you resolve this issue, but it looks complex and needs investigation to find the cause and a solution. We would need to check the SubGit logs, so I suggest opening a ticket on our support forum, support.tmatesoft.com, and uploading all SubGit logs from the affected repository. Alternatively, you can send an email with the logs to [email protected].
We are using SubGit to create a one-way mirror from Subversion to GitLab. This works perfectly fine. We also have a few branches in this repository, and we are able to manually trigger a GitLab pipeline from any of them. When we create a new branch in Subversion, it is translated without problems. However, when we try to manually trigger a GitLab pipeline from that newly made branch, it doesn't appear in the branch list. If we try it using the GitLab API, we get the following response:

    {"message":{"base":["Reference not found"]}}

The new branch is a fork of the mainline branch and it contains a .gitlab-ci.yml file. As far as GitLab CI / the GitLab runner is concerned, the branch doesn't exist; however, in the repository I can see it, and it updates on new commits just fine. I'm quite lost, since as far as I know, if there is a branch, GitLab should be able to launch a pipeline from it. Hopefully someone can point me in the right direction; any ideas are welcome.
Cannot trigger Gitlab-CI pipeline from SVN > Git translated branch [Subgit]
It appears the answer is that once the stream is piped to the response and bytes are flowing to the client, the partial streamed response is already on the client and can't spontaneously be switched over to a JSON error response. It is possible to log a stream error on the server, but there is nothing more to do for the client in this case except catch an error in the Body.json() response parsing and note that the stream failed.
I can't figure out how to pipe stream errors to the client in a Koa app. I have:

    try {
      const res = await request.request({...streamingRequestConfig});
      ctx.body = new Passthrough();  // node stream.Passthrough, per https://github.com/koajs/koa/blob/master/docs/api/response.md#stream
      pipeline(res.data, ctx.body, err => {
        ctx.body = {streamError: err}
      });
      // (1) This error appears in pipeline() error handler,
      // but client receives only a 500 with no parseable body
      // res.data.emit('error', new Error('Test stream error');
      // (2) This error is caught below and returned to client
      // in response body, parseable with Body.json()
      // throw new Error('Test request error');
    } catch (err) {
      ctx.body = {requestError: err}
    }

and in my client:

    const res = await fetch(url);
    try {
      const data = await res.json();
      const error = data.streamError || data.requestError;
      if (error) {
        // Error thrown from (2) is caught here
        displayErrorToUser(error);
      }
    } catch (err) {
      // Error thrown from (1) is caught here, but not parseable;
      // it is a generic 500
      console.error('Parse error')
    }

As described in the comments, throwing at (2) generates a response body with an error that can be parsed on the client, but a stream error (as emulated at (1)) responds with a generic 500: Internal Server Error. How can I pass the stream error to the client so I can display and log it?
How to forward error to client in koa stream?
In read.pipe(write), write is an argument to .pipe(). It will start streaming the data of the file opened with createReadStream() to the file specified in createWriteStream(). Doing this with pipeline produces the same effect, but pipeline automatically handles errors and provides you with a callback to let you know when the stream finishes. If you want to know what is transferred, look at the stream documentation in Node.js. To keep it simple: it reads Buffers of 16 KB (or 64 KB, I don't remember exactly) from the source file and writes them to the destination file, starting automatically as soon as it can.
I have this code in my JS file:

    const read = fs.createReadStream ...
    const write = fs.createWriteStream ...

    const { pipeline } = require('stream')
    ...
    pipeline(
      read,
      write,
      (error) => {}
    )

and I tried it like this:

    read.pipe(write)  // what arguments does 'write' get?

In both cases I can't check what the arguments are after the stream starts. Thank you!
How to know what arguments pipe transfers in Node.js?
It is working fine for me; refer to the documentation on granting the storage account access to Synapse. (A GIF demonstrating the setup was attached to the original answer.)
I have created an event trigger on a Synapse pipeline. When I publish the pipeline, I get an error:

    Forbidden. Role based access check failed for resource /subscriptions/..../resourceGroups/.../providers/Microsoft.Storage/storageAccounts/...

Has anyone managed to trigger a pipeline in Synapse using an event? I have no problem doing this in Azure Data Factory.
Azure synapse pipeline event trigger is failing to publish
Once you have fit your model, the vocabulary_ attribute appears. You can access it with model['tfidfvectorizer'].vocabulary_, which returns a dictionary mapping each token to its feature (column) index.
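A short sketch tying this to the pipeline in the question (X_train/y_train stand in for whatever the asker trains on): after fit(), the vectorizer step can be looked up by its auto-generated name.

```python
model.fit(X_train, y_train)

# vocabulary_ maps token -> column index in the TF-IDF matrix.
vocab = model.named_steps['tfidfvectorizer'].vocabulary_
print(len(vocab))                          # number of distinct tokens kept (min_df=5 applied)
print(sorted(vocab, key=vocab.get)[:10])   # first 10 tokens by column index
```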
My pipeline looks like:

    model = make_pipeline(
        TfidfVectorizer(tokenizer=tokenize, min_df=5),
        MultiOutputClassifier(
            estimator=AdaBoostClassifier(
                base_estimator=DecisionTreeClassifier(max_depth=2),
                n_estimators=10,
                learning_rate=1)))

I want to get the dictionary assembled by TfidfVectorizer. Is that possible?
Is it possible to access the vocabulary list from the nltk vectorizer in an NLP ML pipeline?
You can use findFiles for that:

    def tests = findFiles glob: "*.dll", excludes: <whatever you don't want>

From the comment thread: this replaces the hard-coded def tests = [...] list. If you want all DLLs in the list you can omit the excludes part; otherwise put a pattern you want to exclude there. You have to install the Pipeline Utility Steps plugin if you haven't already, and if your DLLs are in subfolders you can use glob: "**/*.dll" instead.
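A hedged sketch combining the answer with what the asker described (run the Service* DLLs first, then the unit* DLL in a finally block); the glob patterns are examples, not the asker's exact layout, and findFiles comes from the Pipeline Utility Steps plugin.

```groovy
stage('Tests') {
    steps {
        script {
            def serviceDlls = findFiles(glob: '**/Service*.dll')
            def unitDlls    = findFiles(glob: '**/unit*.dll')
            try {
                serviceDlls.each { f ->
                    bat "dotnet test \"${f.path}\""
                }
            } finally {
                // Always run the unit-* DLL last, even if a Service test run failed.
                unitDlls.each { f ->
                    bat "dotnet test \"${f.path}\""
                }
            }
        }
    }
}
```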
Currently in my pipeline script I have four DLLs that I execute in Jenkins. Three of the DLLs start with the word "Service" and one starts with the word "unit". Can I use a wildcard here to pick out the ones that start with Service (something like Service% or $Service), and then add a finally statement to execute the one that starts with unit? I keep getting an error when I use % or $ to refer to the project file: "Project file does not exist".

    stage('Tests') {
        steps {
            script {
                def tests = ['dotnet test %WORKSPACE%/ServiceT1.dll',
                             'dotnet test %WORKSPACE%/Service-JV.dll',
                             'dotnet test %WORKSPACE%/Service-or.dll',
                             'dotnet test %WORKSPACE%/unit-pr.dll']
                tests.each { test ->
                    try {
                        bat 'dotnet test %WORKSPACE%/$Service'
                    } catch (e) {
                        echo e.toString()
                    }
                }
            }
        }
    }
How to avoid hardcoding project paths in the Jenkins pipeline script
So basically the issue was happening because a proper down-merge and up-merge had not been done, and Reckon somehow went haywire; after doing a proper down-merge and up-merge, things are back to normal and Reckon is creating the right tag.
I am new to DevOps, so sorry if my question sounds stupid. We are using GitLab for our pipeline and Reckon for creating tags. On our dev branch, Reckon was creating tags like 1.4.0-beta.x and working fine; after the last commit, the build started failing with the error message:

    Reckoned version 0.1.0-beta.1 is (and cannot be) less than base version 1.4.0-beta.23

The last successful tag was 1.4.0-beta.23. For the build stage we use Gradle with Reckon to create the tag and then push it with a normal git push command:

    - ./gradlew clean -Preckon.stage=$stage -Preckon.scope=$scope
    - ./gradlew docker -x test reckonTagCreate -Preckon.stage=$stage -Preckon.scope=$scope
    - git push --tag

Any suggestions on what went wrong after the last commit on the dev branch, which followed a user2dev merge? Any help appreciated, thanks.
Reckoned version 0.1.0-beta.1 is (and cannot be) less than base version 1.4.0-beta.x
To modify the value of flag_log, simply run:

    pipeline.set_params(my_transformer__flag_log=False)

That should work, and it works for me. Otherwise, if you are sure that it does not work that way, please provide a minimal, reproducible example of your code so that others can reproduce your issue (I'll look into it and update my answer then).
I have a Pipeline that I've trained and saved using pickle. It contains the following steps:

    Pipeline(steps=[('preprocessing', Preprocessor()),
                    ('my_transformer', my_transformer()),
                    ('model', XGBClassifier())
                   ])

I want to log some information when my_transformer() is executed, but only when I predict probabilities, i.e. when I run pipeline.predict_proba(). I do not want the logging line to be executed when I run pipeline.predict(). Here is what my_transformer() looks like:

    class my_transformer(BaseEstimator, TransformerMixin):
        def __init__(self, flag_log=False):
            self.flag_log = flag_log

        def transform(self, features):
            # apply transformations
            if self.flag_log:
                logger.info("log probabilities")

What I want to do is modify the value of flag_log depending on whether I want to log information or not. Basically, I want to have something like this:

    pipeline.set_params(my_transformer__flag_log=True)
    probabilities = pipeline.predict_proba(features)

    pipeline.set_params(my_transformer__flag_log=False)
    predictions = pipeline.predict(features)

I tried the code above, but it does not work: the value of flag_log does not change. Is there some other solution?
Change a parameter of a sklearn pipeline
What you are describing is exactly how Luigi is set up to run. When you specify which tasks to run (in luigi.build or on the CLI), you are specifying which tasks need to be completed for Luigi to consider its job done. Luigi will not care about the rest of your pipeline unless you tell it to. One way to do this is to inform Luigi of all the tasks you do care about, which in this case seem to be TaskB and TaskD. It would look something like:

    luigi.build([TaskD(...), TaskB(...)])
I have a pipeline built with Luigi in which some tasks require other tasks and each task creates a file. Something like:

    TaskA --------> TaskB --------> TaskC --------> TaskD
    (fileA)         (fileB)         (fileC)         (fileD)

The first time I run the pipeline, everything runs well and all the files get created. If I run the pipeline again, nothing runs, since TaskD was already completed. If I manually delete fileB (made by TaskB), I expected it to be recreated and everything downstream to run again, but the pipeline fails: fileB does get recreated, but TaskC fails with an error message saying that fileC already exists. Is there a way for the subsequent files to be recreated again, or for fileC to be overwritten?
A luigi task fails if the next file is already completed
Answer to self: for unknown reasons the error appears in cases where the string value in another column exceeds the allocated varchar length. This affects the decimal-type column in such a way that the Azure pipeline terminates. If someone knows more details about what raises this error, please leave a comment.
I'm facing difficulties solving the error message below, received from Azure Data Factory v2 while trying to run a pipeline that copies a CSV to a SQL table:

    {
        "errorCode": "2200",
        "message": "ErrorCode=DataTypeNotSupported,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=The data type SqlDecimal is not supported.,Source=,'",
        "failureType": "UserError",
        "target": "Copy Invervence Blob to SQL from csv",
        "details": []
    }

What I've tried so far is changing the original .xlsx file to a .csv to eliminate possible formatting issues. When I delete the 'BEDRAG_2020' column from the copy task in the pipeline, it works fine (a snippet of the data preview was attached to the original post). Can someone help me troubleshoot this error?
How can I handle Message=The data type SqlDecimal is not supported. in Azure DWH v2
You need to include the tokenize processor and set the property tokenize_pretokenized to True. This will assume the text is tokenized on whitespace and sentence-split on newlines. You can also pass a list of lists of strings, with each list representing a sentence and each entry being a token. This is explained here: https://stanfordnlp.github.io/stanza/tokenize.html
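A hedged sketch applying this advice to the question's configuration: 'tokenize' is added to the processors list and tokenize_pretokenized tells the pipeline to accept the existing tokenization instead of re-tokenizing. The model paths are kept as in the question; whether you call stanfordnlp or stanza, the option name is the same.

```python
config = {
    'processors': 'tokenize,pos,lemma,depparse',
    'lang': 'de',
    'tokenize_pretokenized': True,   # text is already tokenized: spaces between tokens, newlines between sentences
    'pos_model_path': './de_gsd_models/de_gsd_tagger.pt',
    'pos_pretrain_path': './de_gsd_models/de_gsd.pretrain.pt',
    'lemma_model_path': './de_gsd_models/de_gsd_lemmatizer.pt',
    'depparse_model_path': './de_gsd_models/de_gsd_parser.pt',
    'depparse_pretrain_path': './de_gsd_models/de_gsd.pretrain.pt',
}

nlp = stanfordnlp.Pipeline(**config)
doc = nlp(text)
```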
I have a tokenized file and I would like to use StanfordNLP to annotate it with POS and dependency parsing tags. I am using a Python script with the following configuration:

    config = {
        'processors': 'pos,lemma,depparse',
        'lang': 'de',
        'pos_model_path': './de_gsd_models/de_gsd_tagger.pt',
        'pos_pretrain_path': './de_gsd_models/de_gsd.pretrain.pt',
        'lemma_model_path': './de_gsd_models/de_gsd_lemmatizer.pt',
        'depparse_model_path': './de_gsd_models/de_gsd_parser.pt',
        'depparse_pretrain_path': './de_gsd_models/de_gsd.pretrain.pt',
    }

    nlp = stanfordnlp.Pipeline(**config)
    doc = nlp(text)

However, I receive the following message:

    missing: {'tokenize'}
    The processors list provided for this pipeline is invalid. Please make sure all prerequisites are met for every processor.

Is it possible to skip the tokenization step in a Python script? Thanks in advance!
How can I use StanfordNLP tools (POSTagger and Parser) with an already Tokenized file?
It looks as if this is currently a limitation of the tool. Given that it is only in preview, this will likely be resolved going forward; however, at this time the functionality to strip carriage returns does not appear to be working.
I'm currently trying to build an ADF pipeline using the new Data Wrangling data flow, which is effectively the Power Query element of Power BI as far as I can see (I'm more of a Power BI developer!). In the data flow, I pick up a CSV file from an SFTP location and use the wrangle to transform the data and load it into a SQL Server database. I am successfully picking up the file and loading it into a table; however, the CSV contains carriage returns within the cells, which cause additional lines to be inserted into my table. Using the wrangling data flow, I added a step that removes the carriage return, and I can visibly see the change has been applied in the post steps (pre- and post-change examples were shown as screenshots in the original post). However, when I run the data wrangling step in my pipeline, it seems to load the data ignoring the step that removes the #(CR)#(LF), i.e. the carriage returns still insert new lines into my table. So my question is: does anyone have any experience of using a Data Wrangling data flow to strip out carriage returns, and if so, can they give me a bit of guidance on how they made it work? As far as I can see, the carriage returns are taken into account before the data goes through the wrangle, which kind of defeats the objective of using it! Thanks, Nick
Stripping Carriage Returns in ADF Data Wrangle not working
If you want to run it as a post script, you have to add the post block after each stage, but within it you can always call a method, which saves some code repetition. Another option is to call the post-script method (runPostScript()) directly as the last step of the stage instead of adding a post block, but the caveat is that it may not execute every time: if something within the stage fails, the method will not run.

    pipeline {
        agent { label '!master' }
        stages {
            stage("Checkout Test") {
                steps {
                    'Do something'
                }
            }
            stage('Test1') {
                steps {
                    dir('test') {
                        'Do something'
                    }
                }
                post {
                    always {
                        script {
                            runPostScript()
                        }
                    }
                }
            }
            stage('Test2') {
                steps {
                    dir('test') {
                        'Do something'
                    }
                }
                post {
                    always {
                        script {
                            runPostScript()
                        }
                    }
                }
            }
        }
    }

    def runPostScript() {
        'do something'
    }
I need to run an 'always' block, which contains a script, after every stage. What I have done is shown below. My question is: is there any way I can define the 'always' block just once and call it after each stage, rather than repeating the whole 'always' script after each stage?

    pipeline {
        agent { label '!master' }
        stages {
            stage("Checkout Test") {
                steps {
                    'Do something'
                }
            }
            stage('Test1') {
                steps {
                    dir('test') {
                        'Do something'
                    }
                }
                post {
                    always {
                        script {
                            dir('test') {
                                Uploader('Do something')
                            }
                        }
                    }
                }
            }
            stage('Test2') {
                steps {
                    dir('test') {
                        'Do something'
                    }
                }
                post {
                    always {
                        script {
                            dir('test') {
                                Uploader('Do something')
                            }
                        }
                    }
                }
            }
        }
    }
'Always' block after every stage in a declarative Jenkins pipeline job
The Spinnaker webhook stage doesn't really support any authentication methods out of the box. However, you can configure any headers you want, so you could optionally add one or more authentication headers there. If you use the custom webhook stage (where you configure the webhook in orca-local.yml), these custom headers will not be visible through the API or UI, but they will be visible in the execution context, so this shouldn't actually be used for secrets. You can follow this issue for more information: https://github.com/spinnaker/spinnaker/issues/2787
I'm kind of new to the GCP world, and I'm trying to hit an endpoint using a Spinnaker webhook stage. Webhook URL: https://Testsite.com/s/subconnection/invoke?schedulerName=INV, method: POST, payload: {}, content type: application/json.

Error:

    Webhook failed: Error submitting webhook for pipeline 01EG6RQKB893ZkkkWPEQDDXH to https://Testsite.com/s/subconnection/invoke?schedulerName=INV is returning status code 403

Do I need to add a Spinnaker user to my application's RBAC rules? Please help; I appreciate it. Thanks, Yugi
spinnaker | webhook as stage
The following answers both the question and the last comment to the question, where the OP asks for the row numbers of the outliers:

    What if we want to return the row numbers that go with boxplot.stats()$out from the pipe? If we did b <- data %>% filter(group == 'b') outside of the pipe, we could have used: which(b$value %in% boxplot.stats(b$value)$out)

This is done by left_joining with the original data.

    library(dplyr)
    set.seed(1234)
    data <- data.frame(group = rep(c('a', 'b'), each = 100), value = rnorm(200))

    data %>%
      filter(group == 'b') %>%
      pull(value) %>%
      boxplot.stats() %>%
      '[['('out') %>%
      data.frame() %>%
      left_join(data, by = c('.' = 'value'))
    #          . group
    # 1  3.043766     b
    # 2 -2.732220     b
    # 3 -2.855759     b
Given a data frame like data:

    data <- data.frame(group = rep(c('a', 'b'), each = 100), value = rnorm(200))

we want to filter values for group == 'b' using dplyr and use boxplot.stats to identify outliers:

    library(dplyr)
    data %>%
      filter(group == 'b') %>%
      summarise(out.stats = boxplot.stats(value))

This returns the error "Column out.stats must be length 1 (a summary value), not 4". Why does this not work? How do you apply functions like this inside a pipe?
Applying functions in dplyr pipes
Never mind: the continuous deployment trigger on the release needs to be enabled (shown in a screenshot in the original answer), and I hadn't done that.
I have a solution where I want to implement the whole CI/CD circuit. It's a simple Web API in .NET Core 3.1. When it finishes, my pipeline build triggers the release deployment, but somehow it stays idle until I manually select "Deploy". I want that step to be automatic: if the build passes the checks in the pipeline, it should trigger an automatic release to my environment.
Azure DevOps: continuous deployment not being fired when the pipeline generates a new artifact
    NEW_VERSION=$(echo "${GITHUB_REF}" | cut -d "/" -f3 | cut -c 2- | cut -c 1-13)

maybe? (Note that the extra cut commands need to be inside the command substitution, and that cut -c 1-13 assumes the version part is always 13 characters long, as in 1.0.0-alpha01.)
I would like to tag my releases with a 'v' prefix and the product type as a suffix, e.g. an initial release v1.0.0-alpha01-internal or v1.0.0-alpha01-external. Now I am running a GitHub Actions workflow to publish a release:

    # The GITHUB_REF tag comes in the format 'refs/tags/xxx'.
    # So if we split on '/' and take the 3rd value, we can get the release name.
    run: |
      NEW_VERSION=$(echo "${GITHUB_REF}" | cut -d "/" -f3)
      echo "New version: ${NEW_VERSION}"

With the above snippet I get my new version as v1.0.0-alpha01-internal or v1.0.0-alpha01-external. I don't want my version to be the same as the tag, so I would like to cut the 'v' from the start and '-internal' or '-external' from the end of the release tag. The expected new version would be 1.0.0-alpha01.
format TAG release ref for Github Workflow Action
You can generate the desired boundaries programmatically and use $bucket: $sort the input to achieve the desired ordering (most retweeted first), $bucket to split the collection by day, $push to move each document under the respective day, and $project with $slice to get the top X results. Ruby sample using time, count and message fields:

    require 'mongo'

    Mongo::Logger.logger.level = Logger::WARN

    client = Mongo::Client.new(['localhost:14420'])
    c = client['foo']
    c.delete_many

    10.times do |i|
      day_time = Time.now - i*86400
      100.times do |j|
        time = day_time + j*100
        count = rand*1000
        message = "message #{count}"
        c.insert_one(time: time, count: count, message: message)
      end
    end

    days = (-1..10).map { |i| Time.now - i*86400 }.reverse

    pp c.aggregate([
      {'$sort' => {count: -1}},
      {'$bucket' => {
        groupBy: '$time',
        boundaries: days,
        output: {messages: {'$push' => '$$ROOT'}},
      }},
      {'$project' => {top_messages: {'$slice' => ['$messages', 5]}}},
    ]).to_a
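Since the question uses pymongo, here is a hedged translation of the same approach with the question's field names (created_at, retweet_count) and the 5000-per-day cut-off; the date range and collection names follow the question, but note that pushing whole documents into day buckets can hit the 16 MB document limit for very large days.

```python
from datetime import datetime, timedelta

start = datetime(2020, 3, 1)
days = [start + timedelta(days=i) for i in range(62)]   # day boundaries covering March-April 2020

top_per_day = client['Covid19']['tweets'].aggregate([
    {'$sort': {'retweet_count': -1}},                    # most retweeted first
    {'$bucket': {
        'groupBy': '$created_at',
        'boundaries': days,
        'default': 'other',                              # catches documents outside the boundaries
        'output': {'tweets': {'$push': '$$ROOT'}},
    }},
    {'$project': {'top_tweets': {'$slice': ['$tweets', 5000]}}},
], allowDiskUse=True)
```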
I am using pymongo to do some analytics on MongoDB. In MongoDB there are 480000 JSON objects representing tweets made between March and April 2020 about the Covid-19 virus. In particular, these objects contain two fields: (1) "created_at", the tweet's creation timestamp (for example created_at: 2020-03-20T10:57:57.000+00:00), of type date; (2) "retweet_count", which represents how many times the tweet was retweeted (for example "retweet_count": 30). I would like to build an aggregation pipeline which takes, for each day, the first 5000 JSON objects with the highest value of retweet_count. The problem is that I don't understand whether I have to use a $group clause, a $match clause or a $project clause (I am a newbie). Here is an attempt I've made:

    import pymongo
    from datetime import datetime, tzinfo, timezone
    from pymongo import MongoClient

    client['Covid19']['tweets'].aggregate([
        {
            '$match': {
                "created_at": {
                    '$gte': datetime(2020, 3, 20),
                    '$lt': datetime(2020, 3, 21)
                }
            }
        },
        {
            '$merge': {
                'into': 'tweets_filtered'
            }
        }
    ])

    print(client['Covid19']['tweets_filtered'].count_documents({}))

This pipeline returns the tweets made from 20 March to 21 March, but I would like to generalise the process and take, for each day, the first 5000 tweets with the highest value of retweet_count.
Bucketing top N results with an aggregation pipeline in MongoDB
The solution to this problem is the name of the folder in which these jobs are located. So the name of the job is test/A/master.
I have two Jenkins pipelines, named A and B. Both use the same docker container, named C1. Job A looks like this:

    pipeline {
        agent {
            node {
                label 'c1'
            }
        }
        stages {
            ...
        }
        post {
            always {
                script {
                    echo "always"
                }
            }
            success {
                script {
                    echo "success"
                }
            }
            failure {
                script {
                    echo "failure"
                }
            }
            unstable {
                script {
                    echo "unstable"
                }
            }
        }
    }

The only difference is in job B: in its 'always' post action, it calls job A, like this:

    build job: 'A/master', parameters: [
        string(name: 'p1', value: params["p1"])
    ]

The error occurs when starting job A, saying:

    Error when executing success post condition:
    hudson.AbortException: No item named A/master found

Also, by listing the parent folders, it is evident that there is no folder for job A. How could I solve this problem? KI
Jenkins pipeline - Trigger a new build for a given job
name isn't a property of script. Refactor to be:

master:
  - step:
      name: "SSH Deploy to production web"
      script:
        - pipe: atlassian/ssh-run:0.2.6
          variables:
            SSH_USER: $SSH_USER
            SERVER: $SSH_SERVER
            COMMAND: $SSH_COMMAND
            PORT: $SSH_PORT

(answered Dec 6, 2020 by Nick Hammond)
I was trying to SSH to my server and pull the code and do some configuration stuff, each time code is pushed to master branch. I defined all of my repository variables used in this yaml file. I also added ssh key, added host in the list of known hosts and fetched fingerprint.This is mybitbucket-pipelines.ymlfile:image: atlassian/default-image:2 pipelines: branches: master: - step: script: - name: "SSH Deploy to production web" - pipe: atlassian/ssh-run:0.2.6 variables: SSH_USER: $SSH_USER SERVER: $SSH_SERVER COMMAND: $SSH_COMMAND PORT: $SSH_PORTThe error I get is:I checked my yml file usingbitbucket validatorand everything seems to be OK. I would appreciate any help since I just started using bitbucket pipelines.
Bitbucket ssh pipeline fails - Missing or empty command string
You can see the details of error at here.@{activity('Proc source').error.message}This expression works.Is errorCode saved to your table?Make sure your activity name is correct.ShareFollowansweredAug 5, 2020 at 7:05Steve JohnsonSteve Johnson8,32511 gold badge77 silver badges1818 bronze badges6Hi, thanks for your help! For some reason the error button you've pointed out isn't showing for me. The input and output options are, but the error button is simply missing. Any idea why that might be?–AwareAug 5, 2020 at 8:23Is status of your copy data activity Failed?–Steve JohnsonAug 5, 2020 at 8:43I'm using a databricks python script as the first job, not a copy data activity and that's the job that's failing, if that makes any difference.–AwareAug 5, 2020 at 9:15The strange thing is, if I remove the stored proc part of the pipeline, trigger it again and then head to pipeline runs in the monitoring section, It sort-of-but-not-really gives me an error: "Operation on target ExtractMetadata (name of the databricks python operation) failed:" Then there's nothing underneath that. Is it possible it's just not throwing an error at all?–AwareAug 5, 2020 at 9:22I haven't used databricks python ever.Maybe this don't give you error message.Do you get error code?–Steve JohnsonAug 5, 2020 at 9:41|Show1more comment
Going to try asking this here.I'm trying to write an error message from an Azure Data Factory pipeline to a table in SQL server. It needs to capture the error message from a Databricks Python job. I can't find any official documentation and the method I have found fromthis source:@{activity('Proc source').error.message}..doesn't write anything to the table. Just a blank string with no explanation.Why data factory doesn't just have an area where you can view the details of errors instead of just saying "Failed" is beyond me. Or if it does, it's hidden away.Does anyone have any ideas?
Reading/writing error messages from data factory pipelines
I found the solution to be the following: as PipelineAccessToken is a pipeline variable, it should be enclosed in $( ) brackets. Furthermore, as I keep a PAT within that variable, I have to enclose it in quotes so it gets piped as a string to az devops login. Finally, the solution is this:

"$(PipelineAccessToken)" | az devops login

(answered Aug 3, 2020 by Daniel)
I'm trying to create a release pipeline that will use Azure CLI to update a variable defined in Pipelines-> Library within a variable group.I can update the variable directly from my computer using a PAT(saved within $PipelineAccessToken) I generated from my user account.This it the script the Agent executes during the Release Pipeline:$PipelineAccessToken | az devops login az pipelines variable-group variable update --org "https://dev.azure.com/[myOrganization]" --project [myProject] --group-id [groupId] --name [variableName] --value [newValue]Azure Agent throws me this errorTF400813: The user '' is not authorized to access this resource.What am I doing wrong?
TF400813: The user '' is not authorized to access this resource
I haven't seen a documented limitation on training dataset size. May I know how you built the pipeline? If you are using Azure Machine Learning Designer, could you please try the enterprise version? https://learn.microsoft.com/en-us/azure/machine-learning/concept-ml-pipelines#building-pipelines-with-the-designer Also, here is a tutorial for a large-data pipeline: https://learn.microsoft.com/en-us/azure/machine-learning/tutorial-pipeline-batch-scoring-classification (answered Jul 28, 2020 by Yutong Tie, edited by Dharman)
Comment (sreshta rikka, Jul 29, 2020): Yes, I am using the enterprise version. While training the model, the error is as follows: "User program failed with ColumnUniqueValuesExceededError: Number of unique values in column: "message" is greater than allowed."
I want to train the model with binary logistic regression model,with a dataset of 3000 data points. while creating the pipeline , it fails at the training model step.Please help me in training the model with large dataset or retrain the model continuously.Also Do pipelines have any limitation on the dataset? if so, what is the limit
In Azure ML pipeline, error while training the model with a large dataset
You can use below aggregation with mongodb3.6and abovedb.resources.aggregate([ { "$match": { "type": "FUNC" } }, { "$lookup": { "from": "initiatives", "let": { "id": "$_id" }, "pipeline": [ { "$match": { "$expr": { "$in": ["$$id", "$ressources.function"] } } }, { "$unwind": "$ressources" }, { "$match": { "$expr": { "$eq": ["$ressources.function", "$$id"] } } }, { "$group": { "_id": "$ressources.function", "participation_sum": { "$sum": "$ressources.participating" } }} ], "as": "result" }} ])ShareFollowansweredFeb 10, 2019 at 14:11AshhAshh45.6k1515 gold badges107107 silver badges135135 bronze badges2thank you, it works. is there any solution for the older version like mongodb 2.2 ?–Ayoub kFeb 10, 2019 at 14:23Yes can be done using some javascript tricks but the$lookupwas introduced in 3.2 and above and the good way to do.–AshhFeb 10, 2019 at 14:26Add a comment|
I have two collectionsinitiativesandresources:initiativedocument example:{ "_id" : ObjectId("5b101caddcab7850a4ba32eb"), "name" : "AI4CSR", "ressources" : [ { "function" : ObjectId("5c3ddf072430c46dacd75dbb"), "participating" : 0.1, }, { "function" : ObjectId("5c3ddf072430c46dacd75dbc"), "participating" : 5, }, { "function" : ObjectId("5c3ddf072430c46dacd75dbb"), "participating" : 12, }, { "function" : ObjectId("5c3ddf072430c46dacd75dbd"), "participating" : 2, }, ], }and aresourcedocument:{ "_id" : ObjectId("5c3ddf072430c46dacd75dbc"), "name" : "Statistician", "type" : "FUNC", }so i want to return eachresourcewith the sum ofparticipatingis have. and to that i need to join the two collection.db.resources.aggregate([ { "$match": { type: "FUNC" } }, { "$lookup": { "from": "initiatives", "localField": "_id", "foreignField": "initiatives.resources", "as": "result" } }, ])but first i need tounwindthe foreign field array.example of the expected output:{ "function" : "Data Manager" "participation_sum": 50 } { "function" : "Statistician" "participation_sum": 1.5 } { "function" : "Supply Manage" "participation_sum": 0 }
Mongodb: lookup with field as array of objects [duplicate]
At this time, I am fairly sure this functionality is limited to Snowflake accounts on the Microsoft Azure cloud platform. You can create another trial account on Azure and try these steps again.ShareFollowansweredJul 22, 2020 at 17:28Suzy LockwoodSuzy Lockwood1,08055 silver badges66 bronze badgesAdd a comment|
I am trying to create a SnowPipe pipeline with external stage as 'Azure Blob storage'. I am following the below link and have carried out exactly all the steps properly,but, got stuck at a point where the notification integration have to be established with Azure Blob.https://community.snowflake.com/s/article/Building-Snowpipe-on-Azure-Blob-Storage-Using-Azure-Portal-Web-UI-for-Snowflake-Data-Warehousecreate notification integration SNOWPIPE_DEMO_EVENT enabled = true type = queue notification_provider = azure_storage_queue azure_storage_queue_primary_uri = '<your_storage_queue_url>' azure_tenant_id = '<your_directory_id>';I am getting this error, have tried multiple times the steps outlined in the above article but with no success.I have used correct tenant id and Azure Storage Queue url.SQL compilation error: invalid value [QUEUE - AZURE_STORAGE_QUEUE] for parameter 'Integration Type'I have selected GCP while creating a trial account in Snowflake and here in this case, I am trying to use Azure Blob Storage. I believe that Snowflake will handle this cross vendor platforms interoperability behind the hood so this should not be an issue. But, still thought of cross verifying this doubt. I am using a Trial Version of Snowflake (Enterprise Edition). Can trail version be an issue here? Looking for a help here.
Issue creating Snowpipe on Azure Blob Storage
You approach is good enough by the way (in case that you will run the tests manually not via a CI tool).To shorten the URL you can do two things:Store the URL that you passed in the terminal in an environment variable incypress.jsonfile. It will be something like that:{ "env": { "local": "some_localhost_reference", "staging": "some_staging_reference", "production": "some_production_reference" } }By this you will need only to change the value in your test file to point to the environment that you want to run your tests against.P.S. you need to move thecy.visit()command to abeforeEach()block.Create a command to run cypress in yourpackage.jsonfile to be able to just typenpm run <script_name>(npm run cypressin this case) in your terminal to be able to fire the cypress runner up. And it will be something like that:"scripts": { "cypress": "cypress open" }If you are going to run the tests from CI, it will be dependent on the tool that you are going to use. It can do too much for you.ShareFollowansweredDec 8, 2020 at 2:15Moataz MahmoudMoataz Mahmoud122 bronze badgesAdd a comment|
I'm new to Cypress test and wonder if I could test multiple URLs with one test(sample.spec.js).Here is my sample.spec.jsdescribe("my first cypress test",()=>{ it('navigate to eat site', () => { cy.visit(Cypress.env('url')) }) })I defined env variable 'url' on cypress.json.I was going to use this command line to test multiple URLs with one test.node_modules/.bin/cypress run --spec cypress/integration/examples/sample.spec.js --env url=https://www.google.com --headedIs it possible to define an url array and test all of them using Jenkins Pipeline? Plus, please let me know if there's an way to shorten the command line above.Thank you.
Is it possible to test multiple URLs with one test(Cypress)?
You're better off forwarding from MEM to EX/ALU, which is a bolt on similar to forwarding from EX to EX.  (Also needed is forwarding from MEM to MEM for load followed by store with store value dependency.)Otherwise if you really want to do register write back in the MEM stage for an ALU instruction, you'll have to dual port the register file write path — because you will sometimes have one instruction doing write back in MEM while the previous instruction is doing write back in WB — i.e. in the same clock cycle.  This a complexity that the original designers chose to avoid.  True, that the original design had dual ported for read and single ported for write, but now you need dual read, and dual write.  Not undo-able but the complexity adds up.Still, if you don't have to implement the register file (i.e. just block diagram), then let's note that in the original design, there are two reg read 5-bit inputs and two 32-bit reg data outputs, and one 5-bit reg-write and one 32-bit write data inputs.So, you'd add one more 5-bit reg write "2" and one more 32-bit write data "2" inputs to the registers.  Then wire the appropriate things to those new inputs, namely the output of the ALU output accepted by the MEM stage.ShareFollowansweredJul 21, 2020 at 16:40Erik EidtErik Eidt25.1k22 gold badges3232 silver badges5555 bronze badgesAdd a comment|
I’m trying to figure out what are the changes needed in the data-path and in the forwarding, hardware to allow WB for regular ALU instructions (add, sub, etc..) from the MEM instead of waiting for one more cycle like the regular 5 stage MIPS that has a Mux that chose between the MEM and the ALU result. In this case the only instruction from WB we’ll be LW. Thank you!:)
allow WB from MEM stage to register file in 5 stage mips
Easy solution would be to bind mount local/var/www/htmlsomewhere inside of your container and copy those files to that mounted directory which is now accessible from your container.How you can create the bind mount depends on how you are launching your container (docker cli or docker-compose...)if you are using docker cli, you can use-vswitchdocker run -v /var/www/html:/somwhere/in/your/container ...example indocker-composeversion: "3.8" services: name-of-your-service: image: name-of-your-image volumes: - /var/www/html:/somwhere/in/your/container ...ShareFolloweditedJul 19, 2020 at 16:23answeredJul 19, 2020 at 16:17Matus DubravaMatus Dubrava14k22 gold badges3939 silver badges5959 bronze badges3I will try this, sounds good. One more questions is: where is the dist folder located after build finished?–b0ssJul 19, 2020 at 19:44Sorry, I am not that familiar with GoCD and I think that angular by default creates dist in the root of the project (but this is configurable so it depends on how you set it up, but this should be easily obtainable information, either by going through the relevant docs or by inspecting the container itself.–Matus DubravaJul 19, 2020 at 20:09this helps me i could mount the directory but I dont know where the dist folder which comes out of the dist directory is located so I can copy it... does anyone know?–b0ssJul 19, 2020 at 22:15Add a comment|
I have a GoCD instance running inside a docker container. I want to build an angular2 project by angular-cli through the pipeline of GoCD. This works well, but I need to copy the built dist folder from the docker container to "/var/www/html/" folder on thehostsystem.Im very new to GoCD and docker. Can anyone help me?Edit: One more questions is: where is the dist folder located after build finished?
GoCD Copy dist folder to host system
You will need to write your own version ofLengthFieldBasedFrameDecoder. That said we could also add a protected method that people could override to validate the "parsed" message length. This way the customisation would be minimal on the users end.ShareFollowansweredJul 6, 2020 at 15:47Norman MaurerNorman Maurer23.4k22 gold badges3434 silver badges3131 bronze badges2I was reading a lot of code examples from netty and also watching some videos in youtube AND aren't you literally the creator of netty? I can't believe I marked someone else's answer with the green thingy. I'm pretty sure you don't care about it but still pretty hilarious to me–pest mailJul 9, 2020 at 4:08haha not the creator but yes I am the project lead for a few years now ;)–Norman MaurerJul 9, 2020 at 6:32Add a comment|
I'm adding anew LengthFieldBasedFrameDecoder(64 * 1024, 0, 4)at the start of my pipeline and it works just fine but nothing ever happens when the received integer (the first 4 bytes of the packet that represents the length of the actual packet) is negative or more than64*1024which is the maximum possible length.I want theLengthFieldBasedFrameDecoderto somehow notify me when the size of the coming packet is bigger than64 * 1024or is less than1how can I achieve this?
Having more control on LengthFieldBasedFrameDecoder in netty
NuGet Error NU1101 means the package cannot be found on any sources.SolutionExamine the project's dependencies in Visual Studio to be sure you're using the correct package identifier and version number. Also check that the NuGet configuration identifies the package sources you are expected to be using. If you use packages that have Semantic Versioning 2.0.0, please make sure that you are using the V3 feed,https://api.nuget.org/v3/index.json, in the NuGet configuration.https://learn.microsoft.com/en-us/nuget/reference/errors-and-warnings/nu1101ShareFollowansweredJun 29, 2020 at 6:51Cece Dong - MSFTCece Dong - MSFT30.3k11 gold badge2727 silver badges4141 bronze badges0Add a comment|
When trying to run pipeline build on Azure DevOps,I'm receiving following error:##[error]The nuget command failed with exit code(1) and error(NU1101: Unable to find package ComponentSpace.Saml2.Licensed. No packages exist with this id in source(s): NuGetOrgCan someone point me to the article of how to include the licence? or can tell me how to fix it to pass the build?thanks
Azure DevOps pipeline missing ComponentSpace licence
How can i raising error while python coverage test on azure devops pipeline?This seems to be a known issue when we run the test using the command line/powershell/bash.Publishing test results with failed tests does not fail the buildNow, there is afailTaskOnFailedTestsparam for the Publish Test Results task:- task: PublishTestResults@2 inputs: testRunner: VSTest testResultsFiles: '**/*.trx' failTaskOnFailedTests: trueYou could try to use the taskPublish Test Resultswith paramfailTaskOnFailedTests: trueinstead ofPublishCodeCoverageResultsto resolve this issue.Please checkthis documentfor some more details:To publish test results for Python using YAML, seePythonin the Ecosystems section of these topics, which also includes examples for other languages.Hope this helps.ShareFollowansweredJun 26, 2020 at 2:45Leo LiuLeo Liu73.9k1010 gold badges119119 silver badges143143 bronze badgesAdd a comment|
My test code have some fails.but pipeline is ignore error and run successful.How can i raise error?azure-pipelines.yml... - script: | pipenv run coverage run --rcfile=coverage_config --source='.' manage.py test --keepdb pipenv run coverage xml displayName: 'Run tests' env: AWS_ACCESS_KEY_ID: $(AWS_ACCESS_KEY_ID) AWS_SECRET_ACCESS_KEY: $(AWS_SECRET_ACCESS_KEY) FLANB_DEBUG: 'True' - task: PublishCodeCoverageResults@1 inputs: codeCoverageTool: Cobertura summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/coverage.xml' reportDirectory: '$(System.DefaultWorkingDirectory)/**/htmlcov' ...
How can I raise an error from a Python coverage test in an Azure DevOps pipeline?
There are three kinds of data dependencies, see the wiki page on data dependency.
I1 and I2 (on $t1), I2 and I3 (on $t2), I3 and I4 (on $t3), I3 and I5 (on $t3), I5 and I6 (on $t3) -- read after write -- true (flow) dependency
I4 and I5 (on $t3) -- write after read -- anti dependency
I3 and I5, I5 and I6 (on $t3) -- write after write -- output dependency
(answered Jun 27, 2020 by ajit)
I'm confused of following instructions. $zero is register always containing value of 0I1 : addi $t1, $t10, -10I2 : lw $t2, 0($t1)I3 : lw $t3, 0($t2)I4 : sw $t3, 0($t4)I5 : sub $t3, $zero, $t3I6 : addi $t3, $t3, 1How can I find data dependency here?There are some straightforward like $t1 in I1-I2but what about $t3, between I3, I4, I5, I6? I'm confused about it.Can anyone help me to find all dependencies here?
About Pipeline Architecture's Data Dependency
GridSearchCV scores every cross-validation split and averages those scores to pick the better hyperparameter. You can use grid_pipe.cv_results_ to check every split score, the mean score and the std score; it gives the detailed result for every hyperparameter across the cross-validation splits, so you can analyze that data for a better understanding. (answered Jun 9, 2020 by Uday)
Comments (user8330379, Jun 9, 2020): 'params': [{'ridge__alpha': 10000}, {'ridge__alpha': 1000}, {'ridge__alpha': 100}, {'ridge__alpha': 10}, {'ridge__alpha': 1}, {'ridge__alpha': 0.1}], 'mean_test_score': array([0.71258665, 0.85028674, 0.82807142, 0.75635483, 0.68739042, 0.6649867 ]), 'std_test_score': array([0.02610594, 0.04197133, 0.04905709, 0.04436738, 0.07058033, 0.0835176 ]), 'rank_test_score': array([4, 1, 2, 3, 5, 6]). Thank you! I guess you meant this: ridge_alpha = 1000 is the champion in the contest of cross validation using mean R^2 as the criterion, and this is how GridSearchCV defines the best model. As for it losing the R^2 competition to ridge_alpha = 100 on the whole training set or test set, this is just bad luck. Right? Thank you very much!
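A minimal sketch of how those per-split scores might be inspected, assuming the grid_pipe object from the question (the column names are the standard cv_results_ keys):

import pandas as pd

# after grid_pipe.fit(X_train, y_train)
results = pd.DataFrame(grid_pipe.cv_results_)
# mean/std of the CV folds for every alpha that was tried, best rank first
print(results[['param_ridge__alpha', 'mean_test_score',
               'std_test_score', 'rank_test_score']]
      .sort_values('rank_test_score'))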
I have tried gridsearchcvpipe = make_pipeline(StandardScaler(), Ridge()) param_grid = {'ridge__alpha': [1000, 100,10,1,0.1]} grid_pipe = GridSearchCV(pipe, param_grid, cv = 5)If i exclude 1000 in ridge_alpha, ridge_alpha = 100 is the best model. If I include 1000 in ridge_alpha, ridge_alpha being 1000 is the best model, however, this model has both higher rmse and lower R^2...Why does it not choose alpha = 100 even with 1000? I thought R^2 is the default criteria for regression with Ridge...
What is the criteria for selecting the best model in gridsearchcv with Ridge
The first thing you must understand is that the Copy Activity is at least container-based: there is no single Copy Activity that can copy every container at once. For reading everything in a container, just set the source dataset like this (screenshot omitted; it shows all of the data in container 'test1' being used as the source data). If you want to copy data from multiple containers, then you need multiple Copy Activities, one per container (second screenshot omitted). Does my answer answer your question? Any doubts, please let me know. :) (answered Jun 9, 2020 by Cindy Pau)
Does anyone know how I should configure the "Dataset" to read everything the Blob container has and to be able to perform the copy activity of each container?I'm thinking of using a "GetMetada", but I don't know how to configure it since the "LinkServices" configuration leaves me inside the container and I get an error because in the "Dataset" I must configure the container and I don't know what to put there.
Azure datafactory - pipeline, Dataset, container, linkservices
It seems there is not enough space on the disk where the artifact needs to be downloaded. You need to check the disk space and clean up the machine to make sure there is enough space to get the artifact. Also, if you want to download a few different subfolders of pipeline artifacts using the Download Pipeline Artifacts task from earlier stages in this pipeline, or from another pipeline, you could try Matching Patterns (screenshot of the task configuration omitted). (answered Jun 3, 2020 by Cece Dong - MSFT)
I get an error 'There is not enough space on the disk' during download artifacts from CloudVault. Is it possible to increase the size of memory for artifacts that can be downloaded by task 'Download Artifacts - CloudVault' in Release pipeline? If not, are there simple ways to download a few different subfolders from CloudVault?
Download artifacts in Release Pipeline
It might be possible that the problem comes from the fact that I didn't use the '__' notation after the 'poly' in the param_grid.With this notation, it seems to work:param_grid = {'poly__degree': [1, 2, 3, 4, 5]}ShareFollowansweredJun 1, 2020 at 23:07l.u.d.0.v.i.cl.u.d.0.v.i.c1Add a comment|
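Put together, a sketch of the corrected search over the question's pipeline might look like this (the degree range is taken from the question, and X, y are assumed to be defined as there):

from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline

model = Pipeline(steps=[('poly', PolynomialFeatures()),
                        ('linear', LinearRegression(fit_intercept=False))])

param_grid = {'poly__degree': [1, 2, 3, 4, 5]}   # '__' joins the step name and its parameter

cv_model = GridSearchCV(model, param_grid, scoring='r2', cv=5, n_jobs=1)
cv_model.fit(X, y)
print(cv_model.best_params_)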
I have some issues with the use of the Pipeline Tool and the GridSearchCV. I get the following error message: "TypeError: Last step of Pipeline should implement fit or be the string 'passthrough'. '1' (type ) doesn't".Do you see where is my mistake here?Here's my code:from sklearn.model_selection import GridSearchCV from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression from sklearn.pipeline import Pipeline X = wage['age'][:, np.newaxis] y = wage['wage'][:, np.newaxis] degree = 2 model = Pipeline(steps=[('poly', PolynomialFeatures(degree)), ('linear', LinearRegression(fit_intercept=False))]) param_grid = {'poly': [1, 2, 3, 4, 5]} cv_model = GridSearchCV(model, param_grid, scoring='r2', cv=5, n_jobs=1) cv_model.fit(X, y)
Issues with Pipeline and GridSearchCV
Use df.dropna with how='all'. This will drop only the columns which have NaN in all rows:

In [111]: df.dropna(axis=1, how='all')
Out[111]:
    y1   y3   y4
0    2    1  0.3
1  NaN    2  0.4
2    2    3  1.0
3    3    4  2.0
4    4  NaN  NaN
5    5  NaN  NaN

(answered May 28, 2020 by Mayank Porwal)
Comment (Mayank Porwal, May 29, 2020): @AbimbolaOjikutu Please let me know if the answer worked.
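A minimal pandas sketch of the overall flow described in the question (the toy columns y1 to y4 stand in for the real 36 columns; which columns get mode versus median filling, and which rows to drop, are assumptions):

import numpy as np
import pandas as pd

df = pd.DataFrame({"y1": [2, np.nan, 2, 3, 4, 5],
                   "y2": [np.nan] * 6,
                   "y3": [1, 2, 3, 4, np.nan, np.nan],
                   "y4": [0.3, 0.4, 1.0, 2.0, np.nan, np.nan]})

df["y1"] = df["y1"].fillna(df["y1"].mode()[0])   # mode filling
df["y3"] = df["y3"].fillna(df["y3"].median())    # median filling
df = df.dropna(axis=1, how="all")                # drop columns that are entirely NaN (y2)
df = df.dropna(subset=["y4"])                    # drop only rows still missing y4,
                                                 # instead of a bare dropna() that empties the frame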
My data has 75130 rows × 36 columns, I plan to fill some column's 'NA' with mode filling and some with median. I just learned about the imputer and starts to practice it on my dataset.An example of my dataFrame: y1 y2 y3 y4 0 2 Nan 1 0.3 1 Nan Nan 2 0.4 2 2 Nan 3 1.0 3 3 Nan 4 2.0 4 4 Nan Nan Nan 5 5 Nan Nan NanAfter running a pipeline that carried out mode filling on y1 and median filling on y3, I want to dropna on y2 and y4 but the result I am getting is an empty dataframe with no values in rows and columnPlease what can i do apart from running a long line of code to fillna.
Could not dropna from a dataframe after filling some column missing values with a pipeline
The following script does the job (I added comments to make every step clear). The advantage is that this does not depend directly on the naming of the files, nor on the number of triplets. It only assumes that the directory only contains the files of interest and that they are sorted (which is the case for both in the question).filelist=(path_to_directory/*) # record the (ordered) list of files into an array for ((i=0; i<"${#filelist[@]}"; i+=3)); # loops from 0 to number of # files in increments of 3 do paste "${filelist[@]:i:3}" | awk -f awkfile.awk # paste 3 elements of the array, # with offset i and feed it to awk doneEdit: The code is grayed out after the slash "/", but I do not know how to change this?ShareFolloweditedMay 19, 2020 at 22:09answeredMay 19, 2020 at 21:55Patrick.BPatrick.B13555 bronze badgesAdd a comment|
Suppose I have a folder with 9 files, labeled: X1.txt, X2.txt, X3.txt,Y1.txt, Y2.txt, Y3.txt,Z1.txt, Z2.txt, Z3.txt For each triplet of files (X1, X2, X3), (Y1, Y2, Y3), (Z1, Z2, Z3) I need to merge them and then make AWK do (the same) action on it. Manually, I would need to write in the command line:paste X1.txt X2.txt X3.txt | awk -f awkfile.awk paste Y1.txt Y2.txt Y3.txt | awk -f awkfile.awk paste Z1.txt Z2.txt Z3.txt | awk -f awkfile.awkSince the above shows a clear pattern, I am wondering if there is a way to make this into a single command?
Apply AWK on multiple files with pattern
For now there's no task available to copy contents from Blob storage to VM directly. (Even the Azure File Copy task is only supported to copy local files to VM.)Assuming you have one Linux self-hosted agent, and a Target Linux VM:1.You can useAzure Powershell taskto download the files from blob storage. Check4c74356b41's answer fromhow to download file from azure storage blob.2.Then you can tryCopy Files Over SSH taskto copy the files to Linux VM.Also, you can choose to install self-hosted agent in your Azure VM directly (Similar tothis). After that you can easily use Azure Powershell task to download the files in current machine.ShareFollowansweredMay 18, 2020 at 7:55LoLanceLoLance26.8k11 gold badge4141 silver badges7474 bronze badgesAdd a comment|
i am trying to copy my artefact stored in blob storage to linux vm. I have found AzureFileCopy that works for windows vm but it was not working for linux vm because internally it use winRM command which only works in windows vm. can any one suggest who to achieve this using pipeline task.- task: AzureFileCopy@4 inputs: sourcePath: azureSubscription: destination: # Options: azureBlob, azureVMs storage: #containerName: # Required when destination == AzureBlob #blobPrefix: # Optional #resourceGroup: # Required when destination == AzureVMs #resourceFilteringMethod: 'machineNames' # Optional. Options: machineNames, tags #machineNames: # Optional #vmsAdminUserName: # Required when destination == AzureVMs #vmsAdminPassword: # Required when destination == AzureVMs #targetPath: # Required when destination == AzureVMs #additionalArgumentsForBlobCopy: # Optional #additionalArgumentsForVMCopy: # Optional #enableCopyPrerequisites: false # Optional #copyFilesInParallel: true # Optional #cleanTargetBeforeCopy: false # Optional #skipCACheck: true # Optional #sasTokenTimeOutInMinutes: # Optional
copy file to azure linux vm using pipeline task
I think there are a few approaches you can take. If you need to consider data already in Bigtable, you can create a second input source and feed that into a transform as a side input, or use the CoGroupByKey transform to combine the two inputs. If you need to consider data already being written by your pipeline, you probably need to maintain some state to keep track of the vehicle_ids already written; you can consider using the Beam state API for the latter (note though that the state will be unique to a window and a key). I'm not sure if any of these will be an exact fit for your application, but I wanted to give some pointers you can explore. (answered May 14, 2020 by chamikara)
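A minimal sketch of the side-input pattern, written with the Beam Python SDK purely for illustration (the question uses the Java SDK, and the two Create transforms below stand in for the Kafka and Bigtable reads, which are assumptions here):

import apache_beam as beam

def assign_trip(event, last_seen):
    vehicle_id, payload = event
    previous = last_seen.get(vehicle_id)   # None if the vehicle was never written before
    # ... derive the trip id from `previous` and the event's timestamp ...
    return vehicle_id, payload, previous

with beam.Pipeline() as p:
    existing = p | "FromBigtable" >> beam.Create([("veh-1", "2020-05-01T10:00:00")])
    incoming = p | "FromKafka" >> beam.Create([("veh-1", "e1"), ("veh-2", "e2")])

    enriched = incoming | "AssignTrip" >> beam.Map(
        assign_trip, last_seen=beam.pvalue.AsDict(existing))   # Bigtable data as a side input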
We are streaming data from KAFKA and storing the same to google Bigtable.Before writing to bigtable,a value needs to be caluculated based on the existing values from same table.When a vehcile_id comes from kafka, I will have to check if the data is present already for the vehicle_id in bigtable. Based on the datetime present in bigtable for the vehicle_id, trip Id will be calculated.PCollection<String> ids = pipeline.apply(KafkaIO.<String,String>read().....) PCollection<com.google.bigtable.v2.Row> BTread =pipeline.apply("read", BigtableIO.read().....)Any help on achieving the above requirement will be appreciated.Thanks.
Google data flow PCollection join
Add these lines to the pom:

<properties>
    <java.version>14</java.version>
</properties>

Also check how you set up system.properties. Add a system.properties file to the project root folder with this line:

java.runtime.version=14

App structure:

YourApp
  system.properties
  src
    main
      ...

(answered May 6, 2020 by AzJa)
I use bitbucket pipline to deploy my application to heroku. I addedsystem.propertiesjava.runtime.version=14, but it did not help me.In heroku log: Java app detected -----> Installing JDK 1.8... done -----> Installing Maven 3.6.2... done -----> Executing Maven ..... Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.8.1:compile (default- compile) on project*: Fatal error compiling: invalid target release: 14 -> [Help 1] **My bitbucket pipline looks:image: maven:3.6.1 clone: depth: full pipelines: default: - step: name: Build and test image: maven:3 caches: - maven script: - mvn -B clean verify - step: name: Create artifact script: - tar czfv application.tgz pom.xml src/ artifacts: - application.tgz - step: name: Deploy to production deployment: production script: - pipe: atlassian/heroku-deploy:0.1.1 variables: HEROKU_API_KEY: $HEROKU_API_KEY HEROKU_APP_NAME: $HEROKU_APP_NAME ZIP_FILE: "application.tgz"
I wanted to deploy my java 14 application to Heroku but I encountered some problems with java version
Solved by handling the environment from the config file itself. (answered Jul 5, 2020 by Amr Salem)
I was trying to make my package json script more dynamic and based on which env the e2e test is executed in pipeline it will trigger the relevant conf file"protractor": "xvfb-run --server-args='-screen 0 1920x1080x24' protractor src/test/tests/conf.js"i want to replace conf file name with something like process.env["NODE_ENV"]so if it was development for example code will be like"protractor": "xvfb-run --server-args='-screen 0 1920x1080x24' protractor src/test/tests/development.js"
How can I add an AWS environment variable to my package.json?
You are actively sending the output of the wget commands to /dev/null!!! It is obviously complaining about this. Send it to stdout instead (with -O -):

gifsicle -d 100 -l <(wget https://www.wpc.ncep.noaa.gov/basicwx/91fndfd_loop.gif -O -) <(wget https://www.wpc.ncep.noaa.gov/basicwx/92fndfd_loop.gif -O -) <(wget https://www.wpc.ncep.noaa.gov/basicwx/93fndfd_loop.gif -O -) -o $1

(answered Apr 23, 2020 by Poshi)
I'm currently trying to write a program that uses wget to grab 3 files from the internet and merge them into a gif. Because I'm trying to avoid having temporary files I'm attempting to use pipeline substitution to solve this, however whenever I run the program I get the messages:"gifsicle:/dev/fd/63: empty""gifsicle:/dev/fd/62: empty""gifsicle:/dev/fd/61: empty"Below is the command in question:gifsicle -d 100 -l <(wget https://www.wpc.ncep.noaa.gov/basicwx/91fndfd_loop.gif -O /dev/null) <(wget https://www.wpc.ncep.noaa.gov/basicwx/92fndfd_loop.gif -O /dev/null) <(wget https://www.wpc.ncep.noaa.gov/basicwx/93fndfd_loop.gif -O /dev/null) -o $1
Bash: Failed retrieval while using pipeline substitution for gifsicle
Inside def process_item(self, item, spider) of your pipeline, you can use spider.YOUR_SPIDER_VARIABLE to access any variable of your Spider:

def process_item(self, item, spider):
    if record is None:
        print('storing item')
        self.store_db(item)
        return True
    elif record is not None:
        spider.VARIABLE_TO_INCREMENT += 1  # bump the spider's counter
        raise DropItem("Item already exists: %s" % item['lien_du_bien'])

(answered Apr 22, 2020 by Umair Ayub)
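A small self-contained sketch of the same idea; the spider name, the counter name and the database helpers are hypothetical stand-ins for the question's code:

import scrapy
from scrapy.exceptions import DropItem

class SeLogerSpider(scrapy.Spider):        # hypothetical spider
    name = "seloger"
    dropped_counter = 0                    # read this inside parse() to pick the next postal code

class DuplicateFilterPipeline:             # hypothetical pipeline
    def process_item(self, item, spider):
        record = self.find_in_db(item)     # assumed helper that looks the item up in SQL
        if record is None:
            self.store_db(item)            # assumed helper that inserts the item
            return item
        spider.dropped_counter += 1        # signal the spider to move on
        raise DropItem("Item already exists: %s" % item['lien_du_bien'])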
I'm scraping items and using postal codes (cp) in the urlsdef parse(self, response): liste = ['09', '81'] for counter in range(len(liste)): cp = liste[counter] for i in range(0, 2): user_agent = random.choices(user_agent_list) headers = { 'User-Agent': str(user_agent), "Connection": "close", } next_pagination_link ='https://www.seloger.com/list.htm?projects=2&types=1,2&places=[{cp:' + str(cp) + '}]&sort=d_dt_crea&enterprise=0&qsVersion=1.0&LISTING- LISTpg=' + str(i) + ''In the pipilines.py, I'm using this script in the process_item method to drop existing items in the sql database:if record is None: print('storing item') self.store_db(item) return True elif record is not None: raise DropItem("Item already exists: %s" % item['lien_du_bien'])What I want to do is: If an item is droped, I want to increment the variable counter in the spider by 1 in order to move to another postal code.Is there a way to do this ?
How to increment a counter in a spider if a condition in the pipeline is True? [SCRAPY-PYTHON]
It may depend on what you're using for your items in the foreach activity. But for example, if your foreach activity is looping over the content of a file that you have retrieved by a previous lookup activity - you can get the count of these items by for example:@activity('Lookup activity name').output.countShareFollowansweredApr 22, 2020 at 5:34CedersvedCedersved1,03511 gold badge88 silver badges2121 bronze badgesAdd a comment|
Is there a way to send the number of times the cycle was executed?that is, I have aFor Eachthat executes an ExePipeline and it has 6 activities and only to the last activity I need to send it the number of times that thefor eachwas executed.at the end ofFor Eachit shows how much data "ItemsCount" entered but I couldn't call that value in the last activity of the pipeline.someone to help me thanks.
For Each with internal activities Azure Datafactory
I would do it in the subscribe itself:Something like this:map((myStore) => { return myStore.getUsers().map(({name, lastName}) => { return { name: name, sureName: lastName } }); }) .subscribe(myModifiedStore => { console.log(myModifiedStore.getUsers()); if (myModifiedStore.getUsers().length === 0) { this.store.dispatch(new saveNameAction('James', 'Smith')); } });ShareFollowansweredApr 21, 2020 at 20:15AliF50AliF5017.9k11 gold badge2525 silver badges3939 bronze badgesAdd a comment|
I have this pipe line but in this point, is not efficient because is causing not reachable code in order to dispatch the action, I am thinking on a tap. What is the better way to integrate the action in a pipeline?map((myStore) => { return myStore.getUsers().map(({name, lastName}) => { return { name: name, sureName: lastName }             }); if (myStore.getUsers().length === 0) { this.store.dispatch(new saveNameAction('James', 'Smith')); } }) .subscribe();
How to check the length of a service that returns an array and map this value in a pipeline?
Would you be able to provide a code sample along with a stack trace detailing your error? This will help in better visualizing what you may be trying to achieve. The Firestore documentation provides details on deleting entire collections or subcollections in Cloud Firestore. If you are using a larger collection, you have the option to delete data in smaller batches to avoid out-of-memory errors. The linked code snippet is somewhat simplified, but provides a method for deleting a collection in batches. (answered May 17, 2020 by Jan L)
I have a function in Java which is reading the data from firestore collection and deleting them with fixed batch size. I want to execute this from dataflow, but when I add this in .apply I am getting compilation error: "The method apply(String, PTransform) in the type Pipeline is not applicable for the arguments (String, void)"How can we call such a function inside apply
Delete Firestore Collection Using Dataflow & Java
Solved by creating a new variable that includes the BUILD_STATUS global variable:

environment {
    DEFAULT_SUBJECT = 'Health Check: $BUILD_STATUS'
}

Then call this variable as shown below:

emailext body: '$DEFAULT_CONTENT',
         to: '$DEFAULT_RECIPIENTS',
         subject: "${APP_NAME} ${DEFAULT_SUBJECT}",
         attachmentsPattern: "**/target/${APP_NAME}.jpg"

(answered Apr 16, 2020 by Amr Salem)
Im trying to send an email within jenkinsfile, the email subject contain two variable one of them exists on jenkinsfile APP_NAME and the other one is jenkins Global variable BUILD_STATUSim getting null instead of the actual value for the build statusenvironment { mvnHome = tool name: 'myMvn', type: 'maven' mvnCMD = "${mvnHome}/bin/mvn" APP_NAME = 'test' } post { success { emailext body: '$DEFAULT_CONTENT', to: '$DEFAULT_RECIPIENTS', subject: "${APP_NAME} Health Check: ${env.BUILD_STATUS}", attachmentsPattern: "**/target/${APP_NAME}.jpg" } }when i changed the subject in the form below'$APP_NAME Health Check: $BUILD_STATUS' with single quote i got the actual build status but APP_NAME appears on email $APP_NAME instead of actual namehow i can solve this conflict BUILD_STATUS needs single quote but APP_NAME needs double quote
Unable to get the BUILD_STATUS global variable in a Jenkinsfile
GStreamer offers ateeelement for such cases. Note however that in most cases you will want aqueueafter each branch of a tee to prevent deadlocks. E.g.filesrc location=/home/videos/video1.avi ! avidemux name=demux demux.video_0 ! mpeg4videoparse ! avdec_mpeg4 ! nvvidconv ! video/x-raw,format=I420 ! tee name=mytee ! queue ! appsink name=mysink mytee. ! queue ! filesink location=out.rawShareFollowansweredApr 16, 2020 at 8:48Florian ZwochFlorian Zwoch7,00822 gold badges1212 silver badges2222 bronze badges3Hi, I tried that but usinglocation=out.mp4instead. Now, when I try to play the output video or when I inspect it withgst-discover-1.0, it gives me the next error:"Could not determine type of stream". Any idea of why it could be? Thank you.–SergioApr 16, 2020 at 15:24You cannot just rename a file and hope things fix itself. It you want to store raw video into a container you need a muxer for the desired format. Note that in GStreamer the mp4 muxer does not support raw video. The matroska muxer may be an alternative.–Florian ZwochApr 17, 2020 at 9:16I just tried it withmatroskamuxand works perfectly. The final pipeline is:ss << "filesrc location=/home/videos/video1.avi ! avidemux name=demux demux.video_0 ! mpeg4videoparse ! avdec_mpeg4 ! nvvidconv ! video/x-raw,format=I420 ! tee name=mytee ! queue ! appsink name=mysink mytee. ! queue ! matroskamux ! filesink location=/home/videos/out1.mkv";Thank you very much for your help.–SergioApr 18, 2020 at 12:48Add a comment|
I'm new to GStreamer and I'm trying to create a pipeline to display a video and record it at the same time. I've managed to make the display part using:ss << "filesrc location=/home/videos/video1.avi ! avidemux name=demux demux.video_0 ! mpeg4videoparse ! avdec_mpeg4 ! nvvidconv ! video/x-raw,format=I420 ! appsink name=mysink";Also, I've read thatfilesink location=somepathis used for saving data into a file but I don't know how combine it with the rest of the pipeline.So, how do I useappsinkandfilesinkin the same pipeline?
How to combine appsink and filesink using GStreamer?
I hope this is helpful, but perhaps just add a parameter to your "create_model" function. For example, here is a very basic create_model function that uses the activation function as the argument that GridSearchCV is trying to help you tune:

def create_model(activation_fn):
    # create model
    model = Sequential()
    model.add(Dense(30, input_dim=feats, activation=activation_fn, kernel_initializer='normal'))
    model.add(Dropout(0.2))
    model.add(Dense(10, activation=activation_fn))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='linear'))
    # Compile model
    model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mean_squared_error','mae'])
    return model

Now what you can do is modify this to have a second argument called model_type (or whatever you want to call it):

def create_model(model_type='rfr'):
    if model_type == 'rfr':
        ......
    elif model_type == 'xgb':
        .......
    elif model_type == 'neural_network':
        .......

Then in the params dictionary that is fed into the GridSearchCV call, you just give the model_type key a list of the models that you want to tune (optimize over). Just make sure that within each block of code under a given "if" statement you put in the proper code to create your desired model. (answered Jan 7, 2021 by wjamdanf1234)
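For the scikit-learn regressors in the question below, a simpler variant of the same idea is to loop over the two dictionaries and run one GridSearchCV per model; this sketch assumes the question's models/params dictionaries and an existing X_train/y_train split:

from sklearn.ensemble import RandomForestRegressor, AdaBoostRegressor
from sklearn.model_selection import GridSearchCV

models = {"RandomForestRegressor": RandomForestRegressor(),
          "AdaBoostRegressor": AdaBoostRegressor()}
params = {"RandomForestRegressor": {"n_estimators": [10, 50, 75],
                                    "max_depth": [10, 20, 50],
                                    "max_features": ["auto", "sqrt", "log2"]},
          "AdaBoostRegressor": {"n_estimators": [50, 100],
                                "learning_rate": [0.01, 0.1, 0.5],
                                "loss": ["linear", "square"]}}

best = {}
for name, model in models.items():
    search = GridSearchCV(model, params[name], cv=5, scoring="neg_mean_squared_error")
    search.fit(X_train, y_train)                 # X_train / y_train assumed to exist
    best[name] = (search.best_score_, search.best_params_)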
I would like to run different models using GridSearchCV.models = { "RandomForestRegressor": RandomForestRegressor(), "AdaBoostRegressor": AdaBoostRegressor(),} params = { "RandomForestRegressor": {"n_estimators": [10, 50, 75], "max_depth": [10, 20, 50], "max_features": ["auto","sqrt","log2"]}, "AdaBoostRegressor": {"n_estimators": [50, 100],"learning_rate": [0.01,0.1, 0.5],"loss": ["linear","square"]},}
GridSearchCV for Multiples Models
The problem is that you use the attributeAdder class to create an object att_adder, but then didn't use this object with the dataframe. Just replacing attributeAdder.transform(df) with att_adder.transform(df) will solve the problem. It works:

import pandas as pd

class attributeAdder:
    def __init__(self, add_target=True):
        self.add_target = add_target
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        if self.add_target:
            X["failed"] = X["failures"].apply(lambda x: "No" if x == 0 else "Yes")
            X.drop(columns=["failures"], inplace=True)
        return X

df = pd.DataFrame({"failures": [0, 1, 1, 0]})
att_adder = attributeAdder()
df = att_adder.transform(df)
df.head()

(answered Apr 12, 2020 by Hsuning)
Comment (patelR, Apr 12, 2020): Thank you very much! Solved my problem
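Because the question's original class inherits from BaseEstimator and TransformerMixin, the fixed transformer can also be dropped straight into a scikit-learn Pipeline; a small sketch, where the OneHotEncoder step is only an assumed placeholder for whatever follows in the real workflow:

import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

class AttributeAdder(BaseEstimator, TransformerMixin):
    def __init__(self, add_target=True):
        self.add_target = add_target
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        X = X.copy()                               # avoid mutating the caller's frame
        if self.add_target:
            X["failed"] = X["failures"].apply(lambda x: 0 if x == 0 else 1)
            X = X.drop(columns=["failures"])
        return X

pipe = Pipeline([("add_failed", AttributeAdder()),
                 ("encode", OneHotEncoder(handle_unknown="ignore"))])
encoded = pipe.fit_transform(pd.DataFrame({"failures": [0, 1, 2, 0]}))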
I am having trouble creating a custom transform that applies to a pandas dataframeclass attributeAdder(BaseEstimator,TransformerMixin): def __init__(self, add_target = True): self.add_target = add_target def fit(self, X, y=None): return self def transform(self, X) : if self.add_target: X["failed"]=X["failures"].apply(lambda x: 0 if x==0 else 1) X.drop(columns=["failures"],inplace=True) return X att_adder=attributeAdder() df=attributeAdder.transform(df) df.head()and I get this errorTypeError Traceback (most recent call last) <ipython-input-117-cc8d4ad8702f> in <module> 14 15 att_adder=attributeAdder() ---> 16 df=attributeAdder.transform(df) 17 df.head() 18 TypeError: transform() missing 1 required positional argument: 'X'Does anyone knows what the problem is with this code ? Thank you
Custom transformer Python
For the change incronto catch, you need to run your Jenkinsfile once manuallyin the correct branch. After that, check in "View Configuration" that your cron succeeded ("Build periodically" should be checked and contain the schedule).If it has not, it could be that, at the time when triggers are evaluated, yourenv.APP_NAMEdiffers from'DICTIONARY'.To debug theenv, you may add the following:println "env.APP_NAME is ${env.APP_NAME}" // will run before pipeline pipeline { agent { node {As a side-note, it's recommended usingHinstead of minutes, so not all hourly builds fall exactly on the hour:triggers { cron(env.APP_NAME == 'DICTIONARY' ? 'H 20 * * *' : '') }ShareFollowansweredApr 8, 2020 at 11:58MaratCMaratC6,56922 gold badges2121 silver badges2828 bronze badges1Hi, I'd have below referred jenkinsfile structure. How to add trigger in this case?: ...plan = { stage('Build tests') { sh "docker build --network=host --build-arg CACHE_BUST=\$(date +%s) -t ${DOCKER_IMAGE} ." } stage('Run tests') {.....–user2451016Jan 17 at 16:02Add a comment|
im using multibranch jenkins style each branch has its own jenkinsfile, i have added triggers in jenkinsfile but it didnt trigger anything on the specified time (8pm), im not sure if im missing somethingagent { node { label 'master' } } triggers { cron(env.APP_NAME == 'DICTIONARY' ? '00 20 * * *' : '') } stages { stage('SCM Checkout') { steps { git(branch: 'test', url: 'https://gitlab.testral.ba/amramework.git', poll: true, credentialsId: 'GitlabCred') } }
How can I trigger a build from a Jenkinsfile using cron syntax?
After @desernaut pointing to the right direction, this is the answer:parameters['preprocessor__num__imputer__strategy'] = ['most_frequent','mean', 'median',]Thanks @desernaut!ShareFollowansweredApr 7, 2020 at 13:22DanDan14799 bronze badgesAdd a comment|
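If it helps, the same '__' chaining can reach any level of the nested pipeline, and the available keys can be listed rather than guessed; a short sketch assuming the my_pipeline object from the question (the extra model parameter is only an illustrative assumption):

# every tunable name, e.g. 'preprocessor__num__imputer__strategy' or 'model__max_depth'
print(sorted(my_pipeline.get_params().keys()))

parameters = {
    'preprocessor__num__imputer__strategy': ['most_frequent', 'mean', 'median'],
    'model__max_depth': [3, 6],     # assumption: also tuning the XGBRegressor a little
}

CV = GridSearchCV(my_pipeline, parameters, scoring='neg_mean_absolute_error', n_jobs=1)
CV.fit(X_train, y_train)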
I'm a newbie in this DataScience realm and in order to organize my code I'm using pipeline.The snippet of the code I'm trying to organize follows:### Preprocessing ### # Preprocessing for numerical data numerical_transformer = Pipeline(steps=[ ('imputer', SimpleImputer()), ('scaler', StandardScaler()) ]) # Preprocessing for categorical data categorical_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='most_frequent')), ('onehot', OneHotEncoder(handle_unknown='ignore', sparse=False)) ]) # Bundle preprocessing for numerical and categorical data preprocessor = ColumnTransformer( transformers=[ ('num', numerical_transformer, numerical_cols), ('cat', categorical_transformer, categorical_cols) ]) ### Model ### model = XGBRegressor(objective ='reg:squarederror', n_estimators=1000, learning_rate=0.05) ### Processing ### # Bundle preprocessing and modeling code in a pipeline my_pipeline = Pipeline(steps=[('preprocessor', preprocessor), ('model', model) ]) parameters = {} # => How to set the parameters for one of the parts of the numerical_transformer pipeline? # GridSearch CV = GridSearchCV(my_pipeline, parameters, scoring = 'neg_mean_absolute_error', n_jobs= 1) CV.fit(X_train, y_train)How can I change the parameters for the Imputer found in the numerical_transformer pipeline?Thank you,
Setting the parameters of an imputer within a three levels pipeline
I think you can follow this example here and this example. These two simple examples demonstrate how to use an input data pipeline. Hope this helps. Thanks! (answered Apr 5, 2020 by Vishnuvardhan Janapati)
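A rough sketch of one way to wrap a flow_from_dataframe iterator in tf.data; the image size, the number of classes and the train_flow name are assumptions, not values from the question:

import tensorflow as tf

# `train_flow` is assumed to be the iterator returned by
# ImageDataGenerator().flow_from_dataframe(...)
num_classes = 10            # assumption
img_h, img_w = 224, 224     # assumption

train_dataset = tf.data.Dataset.from_generator(
    lambda: train_flow,
    output_types=(tf.float32, tf.float32),
    output_shapes=([None, img_h, img_w, 3], [None, num_classes]),
).prefetch(tf.data.experimental.AUTOTUNE)

# model.fit(train_dataset, steps_per_epoch=len(train_flow), epochs=5)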
Using the input pipelines with tf.dataGenerating the Dataset using tf.data.Dataset.from_generator()The output I got after fitting the model with the train_dataset
How to use Keras tf.data with a generator (flow_from_dataframe) to form a proper input pipeline?
Solved by adding the emulator command inside a shell script file and executing it with the -d (detached) flag, as shown below:

sh (label: 'Building Emulator', returnStdout: true, script: "docker exec -d --privileged ${containerName} bash -c './root/start_emu.sh'")

(answered Jul 5, 2020 by Amr Salem)
Im running an emulator device inside docker container using jenkinsfile, one of the command is for launching the emulator and output is continuous streaming of log like the one below.+ docker exec mycontainer emulator -avd pixel emulator: ERROR: AdbHostServer.cpp:102: Unable to connect to adb daemon on port: 5037 pulseaudio: Failed to initialize PA contextaudio: Could not init `pa' audio driver emulator: ERROR: AdbHostServer.cpp:102: Unable to connect to adb daemon on port: 5037.....etcIssue is jenkins keep waiting for this step to end however it will never end, so how i can ignore the log and make jenkins move to next stage.here is samplestage('Building Docker') { steps { sh label: 'Creating Appium container', script: 'docker run --privileged -d -p 4750:4723 --name mycontainer amrka/ultimate:latest' sh label: 'Building Emulator', returnStdout: true(i try it with false as well), script: 'docker exec mycontainer emulator -avd pixel' } }
How can I ignore the continuous log from a shell command in a Jenkinsfile and move to the next steps/stage?
You can enable some logging to have more information about what is really happening.Go to "Manage Jenkins" -> "System Log" -> "Add new log recorder" and give it a name then add the 3 following loggers:com.cloudbees.jenkins.GitHubPushTriggerorg.jenkinsci.plugins.github.webhook.WebhookManagercom.cloudbees.jenkins.GitHubWebHookNow you can click "Test hook" on the Github UI and check what is in the log. If there's nothing that obviously means that Github cannot reach your Jenkins instance but I guess if that's the case Github should be warning you that the trigger do not work well.ShareFollowansweredMar 24, 2020 at 8:06OpaOpa9111 silver badge33 bronze badgesAdd a comment|
I want to set up a GitHub webhook trigger to start the pipeline as soon as a commit is pushed. I've done the usual and set up the webhook to http://Jenkins URL:port/github-webhook/ but it doesn't seem to work. I've also added the repo in the GitHub repo field in the configuration part of the pipeline job. What other settings do I need to set to make it work?
How to set up Jenkins Pipeline trigger on commit to Github SCM private repo?
As far as I know this functionality does not exist. However, for this specific question, you can just always run the undersampling and if your dataset is not imbalanced, the undersampler will simply have no effect (or very little).ShareFollowansweredMar 25, 2020 at 16:24ramobalramobal25122 silver badges99 bronze badgesAdd a comment|
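One way to express that conditionally is to build the step list before constructing the pipeline; the is_imbalanced flag below is an assumption (e.g. computed from the class ratio of y):

import xgboost as xgb
from imblearn.pipeline import Pipeline as IMBPipeline
from imblearn.under_sampling import RandomUnderSampler

steps = []
if is_imbalanced:                          # assumed flag, e.g. y.value_counts(normalize=True).min() < 0.3
    steps.append(('sampling', RandomUnderSampler()))
steps.append(('clf', xgb.XGBClassifier(n_jobs=-1)))

pipe = IMBPipeline(steps)                  # same pipeline, with or without the undersampling step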
I was wondering if it is possible to have a pipeline with mandatory elements and optional ones. And the optional ones are conditional. For example, that you can have a pipeline with downsampling element or you can have the same pipeline without downsampling. Sofrom imblearn.pipeline import Pipeline as IMBPipeline import xgboost as xgb from imblearn.under_sampling import RandomUnderSampler pipe = IMBPipeline([ ('sampling',RandomUnderSampler()), ('clf', xgb.XGBClassifier(**params, n_jobs=-1)) ])and you only have the sampling part if you have an imbalanced dataset for example. But the sampling part is still in the pipeline, just conditional. Is there anything like this?
Conditional elements in a Python Pipeline
As they use

script_to_run: |-
  pip install pipenv
  export PATH=$(python -m site --user-base)/bin:$PATH

it looks like it's running in bash. You may then try e.g. $JENKINS_HOME or $BRANCH_NAME and see if it works. (answered Mar 12, 2020 by MaratC)
Is there any way to capture the users email address who ran the build and put it into the jenkins yml config file? in other words, can I use a standard JENKINS ENV VAR in my yaml config file that is read into my pipeline? I am using this plugin:https://engineering.salesforce.com/open-sourcing-the-jenkins-config-driven-pipelines-plugin-9c0becaa5f79
can I use Jenkins variables inside a yaml config file?
And My Answer!After comparing the output files from VS deploy and what's in the deploy artifact I realized my .xml comment files were missing in the artifact, the same comments used by swagger.... UGH...Solution: Copy to Output Directory: Copy if newer - Lesson Learned!ShareFollowansweredMar 1, 2020 at 1:03WeisserHundWeisserHund1501010 bronze badgesAdd a comment|
Ran into an issue the other day that took me a few hours of head bashing to resolve. I hope if someone is having a similar issue I can save them some time. I have a new API i'm building and everything working as expected locally. Setup my dev and staging environments in azure then went about setting up my build and release pipelines. All of that is pretty straight forward as it's just a bunch of button clicking with a little yaml editing.after the deploy I try to hit my API and bam! error 500. After a bunch of reading I see similar issues regarding AspNetCoreModule vs AspNetCoreModuleV2 and some issues with InProcess vs OutOfProcess.I then deployed right from VS and amazingly everything worked. I can't do that EVERY time so went back to pipeline deploy and bam error 500 again."Detailed Error Information: Module AspNetCoreModule Notification ExecuteRequestHandler Handler aspNetCore Error Code 0x00000000"
.NET Core 3.0 deploy pipeline to Azure: error 500, AspNetCoreModule(V2) handler aspNetCore, error code 0x00000000
You have to set up a webhook in the git repository and enable a flag in the job config. There is a simple write-up with images. (answered Feb 25, 2020 by Ram)
Comment (1r0n-man, Feb 25, 2020): In the "Which events would you like to trigger this webhook?" section there is no option for notifying when a commit happens.
I have a Jenkins pipeline job. I want my job to be triggered whenever there is a commit in my GitHub repository. Note: I am able to do this as a freestyle project. Now I want it as a pipeline project.
Trigger a Jenkins Job when a git commit happens in my repo
Is it a private instance of Cloud Data Fusion or a public one? For public CDF instances, the MySQL database should be on a public IP address and allow connections. For private CDF instances, make sure you have followed the instructions here: https://cloud.google.com/data-fusion/docs/how-to/create-private-ip, mainly that the VPC peering is set up. (answered Feb 27, 2020 by Sagar Kapare)
I am connecting a MySQL database to Google BigQuery through a Data Fusion pipeline. I used a JDBC driver jar file, installed it, and put the details into the source plugin. When browsing the data for the database (MySQL) connection, I entered the host name, port, user id and password properly, and now I am getting this error: "Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server." Help me with this....
Issue connecting a Google Data Fusion pipeline to a MySQL database
I've been using Kubernetes in parallel with Drone.io for a few months and would recommend it to you as well. In the past I used Jenkins but had to migrate the pipelines due to its limitations. At the end of the day it's a matter of preference and it is going to depend a lot on the project you're working on.Pros for Drone.io:FreeOpen sourceBuilt on Docker (easy setup)Integrates with GitHub and BitBucketUsed in production by many large companies (good skill to add on your CV)ShareFollowansweredFeb 22, 2020 at 23:22Raul ButucRaul Butuc32911 silver badge1212 bronze badgesAdd a comment|
Closed.This question is seeking recommendations for software libraries, tutorials, tools, books, or other off-site resources. It does not meetStack Overflow guidelines. It is not currently accepting answers.We don’t allow questions seeking recommendations for software libraries, tutorials, tools, books, or other off-site resources. You can edit the question so it can be answered with facts and citations.Closed4 years ago.Improve this questionlike I said in the title I'm new to DevOps and CI/CD. I don't have much experience (except for online tutorials) and I'm looking to start a project (nothing huge) that will be using automated CI/CD pipelines for all microservices.Question is, what should I be using? There's an abundance of tools avilable.. Jenkins, CicleCI, TravisCI, DroneCI, GitLabCI, BitbucketCI, etc. It's becoming extremely confusing as to whether they are the same or not. Which of them would be the best to use in parallel with K8s, for many small microservice deployments?Sorry if it sounds silly. First question here on StackOverflow.
New to DevOps and CI/CD [closed]
No, not if you use git as a shell command: sh "git". If you want to use a plugin's functionality, you need to install that plugin.ShareFollowansweredApr 25, 2020 at 18:53nandilovnandilov71988 silver badges1818 bronze badgesAdd a comment|
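For example, a small declarative pipeline sketch that calls the binaries already installed on the node, with no Git or Terraform plugin involved (the stage name and commands are just illustrations):

pipeline {
    agent any
    stages {
        stage('Tooling check') {
            steps {
                // These run whatever is on the agent's PATH, not Jenkins plugins
                sh 'git --version'
                sh 'terraform --version'
            }
        }
    }
}

You only need the Git plugin once you want the git/checkout pipeline steps rather than plain shell commands.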
I am new to Jenkins and trying to understand the use of plugins. I have installed Jenkins on a server which already has some software like Git and Terraform installed. Do I still need to install plugins for Jenkins explicitly?
Jenkins Plugins
Check your pipeline. You may have added the build step, but the deploy stage just fetches the code from version control instead of the build output. In order to solve that, follow these steps:
- Specify a name for the output artifact at the build step.
- At the deploy step, select as input artifact the artifact you set as the output artifact of the build step.
ShareFollowansweredFeb 15, 2020 at 13:10Dimitrios DesyllasDimitrios Desyllas9,5521616 gold badges7676 silver badges178178 bronze badgesAdd a comment|
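If you prefer to verify the wiring from the CLI rather than the console, something along these lines lists each action's input and output artifacts (the pipeline name is a placeholder; this assumes the AWS CLI is configured):

# "my-pipeline" is a placeholder name; shows how artifacts flow between the build and deploy actions
aws codepipeline get-pipeline --name my-pipeline \
    --query 'pipeline.stages[].actions[].{action:name,inputs:inputArtifacts,outputs:outputArtifacts}'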
In my existing AWS pipeline I have the following buildspec.yml:
version: 0.2
phases:
  build:
    commands:
      - cd media/web/front_dev
      - echo "Hello" > ../web/txt/hello.txt
artifacts:
  files:
    - ./media/web/hello.txt
And the appspec.yml has the following:
version: 0.0
os: linux
files:
  - source: /
    destination: /webserver/src/public
But the file hello.txt is not being deployed to the server in the deploy phase. Once I ssh into the machine I check the following path:
/webserver/src/public/media/web/hello.txt
But the file is not there. Do you have any idea why? My pipeline initially had only a source and a deployment step; then I edited it in order to have a CodeBuild step as well.
Why is the file.txt not being deployed to my server?
Linked services are not technically defined as part of the pipeline but rather as part of the Data Factory itself. Try exporting the Data Factory from either the Azure Portal (the Export Template button on the data factory) or PowerShell for the linked services information. This is common practice with a lot of ETL- or ELT-like processes, where the connection string information is defined and shared by the main instance and not defined locally in the jobs.ShareFollowansweredFeb 10, 2020 at 4:21DreadedFrostDreadedFrost2,83311 gold badge1212 silver badges3030 bronze badgesAdd a comment|
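A minimal PowerShell sketch of that export (the resource group and factory names are placeholders; this assumes the Az.DataFactory module is installed and you are logged in with Connect-AzAccount):

# Dump each linked service definition of the factory to its own JSON file
Get-AzDataFactoryV2LinkedService -ResourceGroupName "my-rg" -DataFactoryName "my-adf" |
    ForEach-Object { $_ | ConvertTo-Json -Depth 10 | Out-File "$($_.Name).json" }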
I have exported all my pipelines (Azure Data Factory) and they are in a .zip folder. It contains all my pipeline information (JSON format) along with my linked services. When I wanted to import the zipped folder, I selected the "pipeline from template" option; after the import I have all my pipelines, but no linked services were imported. Can anyone tell me what the issue is?
Importing pipeline with linked services in azure portal
The best thing to do is to merge your tokens with whitespace and then use the tokenize.whitespace option. So for instance if I had the raw text "This is a sentence." and I tokenize it into ("This", "is", "a", "sentence", "."), I would merge that back into a string "This is a sentence ." and use the tokenize.whitespace option, which will just split on whitespace.ShareFollowansweredFeb 7, 2020 at 21:09StanfordNLPHelpStanfordNLPHelp8,70911 gold badge1111 silver badges99 bronze badges1Makes sense, but I have a concern. I need this other tool to do some extensive morphological analysis and base phrase chunking that Stanford does not provide, so I need to use its tokenization scheme. However, one of the important things I need Stanford to do is give me the beginning and ending indices of each word as they stand in the original text--so I need an exact alignment between the list of tokens each tool uses. Merging my tokens with white space can do that but it will offset the indices for each word.–c_carmichaelFeb 8, 2020 at 3:16Add a comment|
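A minimal Java sketch of that approach (the sentence is just the example above; note the caveat from the comment: offsets will then refer to the re-joined, whitespace-separated string rather than the original raw text):

import edu.stanford.nlp.pipeline.CoreDocument;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import java.util.Properties;

public class WhitespaceTokenizeExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize,ssplit,pos");
        // Split only on whitespace so the pre-tokenized boundaries are kept as-is
        props.setProperty("tokenize.whitespace", "true");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        // Tokens produced by the other tool, re-joined with single spaces
        CoreDocument doc = new CoreDocument("This is a sentence .");
        pipeline.annotate(doc);
        doc.tokens().forEach(t -> System.out.println(t.word() + "/" + t.tag()));
    }
}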
I'm doing some natural language processing with Arabic. Since I'm working with a couple of different NLP tools in tandem, I want to be able to give raw text to a StanfordCoreNLP pipeline but provide my own list of tokens rather than having it do the tokenization. Is there a way to do that?
Is there a way to give a StanfordCoreNLP pipeline raw text and a list of tokens as input?
You need an apk file in your input artifact to the 'Test' stage (AWS Device Farm action) and then specify the apk filename in 'App - optional' field while setting up the AWS Device Farm Test action in CodePipeline.ShareFollowansweredFeb 9, 2020 at 12:49shariqmawsshariqmaws8,46511 gold badge1818 silver badges3535 bronze badges1thank you for your reply. I have specifed the name "app-release.apk" in the 'App -optional fields' and the test stage fails with the following error: Did not find the file app-release.apk in the input artifacts ZIP file. Verify the file is stored in your pipeline's Amazon S3 artifact bucket: codepipeline-us-west-2-416983371916 key: is2_pipeline/BuildArtif/EYqfnM3 I'm not sure the build stage creates the zip file properly with the apk inside. How can I check the presence of my .apk file?–rubik90Feb 9, 2020 at 21:27Add a comment|
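As a rough buildspec sketch for getting the apk into the artifact root (the Gradle task and output path are assumptions about a typical Android project, not taken from the question):

version: 0.2
phases:
  build:
    commands:
      - ./gradlew assembleRelease   # assumed build command
artifacts:
  files:
    - app/build/outputs/apk/release/app-release.apk   # assumed output path
  discard-paths: yes   # puts app-release.apk at the root of the artifact ZIP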
I've created a pipeline using CodeCommit -> CodeBuild -> CodePipeline in order to automatically build and test my Android app, which is located in my GitHub repository. But at the first stage after the build step, the pipeline returns an error. I don't know what the app location should be, if it refers to an .apk file, because there is no .apk file in my repository. Can anyone help me?
AWS Codepipeline first stage
I've been learning Bitbucket Pipelines the last 2 days and a thing that popped right out of your configuration is that you shouldn't have spaces between the branch names. '{master, develop, feature/branchThatDontTrigger}' should be '{master,develop,feature/branchThatDontTrigger}'. Also, if you want, you can include all feature branches with feature/*. Here's one example of that: StackOverflow answerShareFollowansweredJun 13, 2020 at 22:37Sérgio AzevedoSérgio Azevedo32311 gold badge44 silver badges1313 bronze badgesAdd a comment|
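A corrected bitbucket-pipelines.yml sketch (the script contents are placeholders):

pipelines:
  branches:
    '{master,develop}':
      - step:
          caches:
            - node
          script:
            - npm install
            - npm test
    'feature/*':
      - step:
          script:
            - npm install
            - npm test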
Hi guys, I have this config in my bitbucket-pipelines.yml:
pipelines:
  branches:
    '{master, develop, feature/branchThatDontTrigger}':
      - step:
          caches:
            - node
          script:
            ......
The pipeline works for both master and develop, but since I added the feature branch it doesn't start. I've also tried writing the name of the branch without the feature/ prefix, but nothing has changed. Can someone help me? Should I change something in the repository settings?
Bitbucket Pipelines don't trigger on feature branches
You never mentioned in what way it doesn't work for you, but I assume it's because your SubString method never gets called and instead gets interpreted as text in your string. Try changing your line to the following instead and see if it does what you expect. You could try it out first by just writing the output to screen rather than (potentially) updating your AD object with the wrong value.
Get-ADUser abc -Properties Description | foreach { Write-Output "$($PSItem.Description.SubString(0,10))" }
And then run the update once you've made sure you have what you need (note that $PSItem is only defined inside a script block, so wrap Set-ADUser in ForEach-Object):
Get-ADUser abc -Properties Description | ForEach-Object { Set-ADUser -Identity $PSItem -Description $PSItem.Description.SubString(0,10) }
ShareFollowansweredJan 28, 2020 at 11:05notjustmenotjustme2,42622 gold badges2121 silver badges2828 bronze badgesAdd a comment|
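One more defensive variation, as an untested sketch (it assumes every user actually has a description set): SubString(0,10) throws when the description is shorter than 10 characters, so you can clamp the length first:

Get-ADUser abc -Properties Description | ForEach-Object {
    # Take at most 10 characters without throwing on short descriptions
    $length = [Math]::Min(10, $PSItem.Description.Length)
    Set-ADUser -Identity $PSItem -Description $PSItem.Description.SubString(0, $length)
}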
I am trying to replace a user's description with a substring of their description. I want it to be just the first 10 characters. I am trying it like this:
Get-ADUser abc -Properties description | Set-ADUser -Description "($($PSItem.Description).substring(0,10))"
Can you give me a hint on how to make it work?
PowerShell - problem with substring in a pipeline
var express = require('express');
var app = express();

// Return the aggregation promise so the route can wait for the result
function getModel(model) {
    return model.aggregate([{
        $group: {
            _id: null,
            "price": { $sum: "$price" }
        }
    }]).exec();
}

app.get('/', function(req, res) {
    console.log('marhaba');
    // ==> here call the getModel function, passing your mongoose model
    getModel( /* ** Model ** */ )
        .then(d => res.send(JSON.stringify(d)))
        .catch(e => res.status(500).send(e.message));
});

app.listen(3000, function() {
    console.log("Working on port 3000");
});
ShareFollowansweredJan 27, 2020 at 3:37tuhin47tuhin475,54755 gold badges2020 silver badges3333 bronze badges0Add a comment|
I have a pipeline, and I want to return its result through an Express GET method; I don't know whether it would be more advisable to send it over a socket instead. This is my file pipeline.js:
function getModel(model) {
    model.aggregate([{
        $group: {
            _id: null,
            "price": {
                $sum: "$price",
            }
        }
    }]).exec((e, d) => {
        return JSON.stringify(d)
    })
}
module.exports = getModel;
In the model.js file I'm going to call my pipeline.js file and therefore the function. model.js:
const mongoose = require('mongoose');
const Schema = mongoose.Schema;
const getModel = require('./pipeline');

const mySchema = new Schema({
    user: { type: Schema.ObjectId, ref: 'User' },
    namepet: String,
    type_of_service: String,
    characteristic_of_pet: String,
    price: Number
});

const model = mongoose.model('Cites', mySchema);
here is the function -> getModel(model);
module.exports = model;
It works for me as I want; the problem is that I have to send the result through a GET method and I have no idea how to do it. How can I send the result (the one indicated by the red arrow in the image) through a GET method?
How to return the result of a MongoDB pipeline via a GET method?
In addition to backticks `command`, command substitution can be done with $(command) or "$(command)", which I find easier to read, and allows for nesting.
OUTPUT=$(ls -1)
echo "${OUTPUT}"

MULTILINE=$(ls \
            -1)
echo "${MULTILINE}"
Quoting (") does matter to preserve multi-line variable values; it is optional on the right-hand side of an assignment, as word splitting is not performed, so OUTPUT=$(ls -1) would work fine.ShareFolloweditedMay 8, 2020 at 21:18vstepaniuk74066 silver badges1616 bronze badgesansweredJan 10, 2011 at 21:04Andy LesterAndy Lester92k1414 gold badges102102 silver badges154154 bronze badges2169Can we provide some separator for multi line output ?–AryanFeb 21, 2013 at 12:2628White space (or lack of whitespace) matters–AliApr 24, 2014 at 10:4010@timhc22, the curly braces are irrelevant; it's only the quotes that are important re: whether expansion results are string-split and glob-expanded before being passed to the echo command.–Charles DuffyApr 21, 2015 at 15:375Ah thanks! So is there any benefit to the curly braces?–timhc22Apr 21, 2015 at 16:0123Curly braces can be used when the variable is immediately followed by more characters which could be interpreted as part of the variable name, e.g. ${OUTPUT}foo. They are also required when performing inline string operations on the variable, such as ${OUTPUT/foo/bar}–rich remerJun 1, 2016 at 23:16|Show16more comments
I have a pretty simple script that is something like the following:
#!/bin/bash

VAR1="$1"
MOREF='sudo run command against $VAR1 | grep name | cut -c7-'

echo $MOREF
When I run this script from the command line and pass it the arguments, I am not getting any output. However, when I run the commands contained within the $MOREF variable, I am able to get output. How can one take the results of a command that needs to be run within a script, save it to a variable, and then output that variable on the screen?
How to assign a variable with the first two characters of a passed argument in Bash [duplicate]
It's not documented in the Keycloak API docs; I found the solution by guessing what this could be :-).
headers = {'Authorization': 'Bearer %s' % token}
groups_request = requests.get('http://yoursite:8080/auth/admin/realms/<realm_name>/users/<user_id>/groups', headers=headers)
ShareFolloweditedJan 28, 2020 at 14:54answeredJan 28, 2020 at 11:19SwissNavySwissNavy64911 gold badge1212 silver badges2929 bronze badgesAdd a comment|
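For completeness, a hedged Python sketch of the whole call, including obtaining an admin token first (the host, realm, user id and credentials are placeholders; it assumes the built-in admin-cli client and the /auth context path used above):

import requests

base = 'http://yoursite:8080/auth'

# Get an admin access token (placeholder credentials)
token = requests.post(
    base + '/realms/master/protocol/openid-connect/token',
    data={'grant_type': 'password', 'client_id': 'admin-cli',
          'username': 'admin', 'password': 'admin-password'},
).json()['access_token']

headers = {'Authorization': 'Bearer %s' % token}
groups_request = requests.get(
    base + '/admin/realms/<realm_name>/users/<user_id>/groups',
    headers=headers,
)
print(groups_request.json())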
My setup: Docker containers for web and Keycloak.
- web container: Django 2.2, social-auth-app-django==3.1.0, social-auth-core==3.2.0
- keycloak container: pulled jboss/keycloak image.
The question is about the code: I am using social-auth-app-django (not python-keycloak); login and logout are already functional. Is there a call that would give me the list of groups that the user belongs to (of course I am talking not about "django" users and groups but about "keycloak" users and groups, which were set up via the Keycloak admin interface)? I have my pipelines.py file where I am trying to create a custom pipeline to pull this info out of the user and backend objects, e.g. print("backend.get_user", backend.get_user(user_id)) gives me back: {'username': 'xxxx', 'email': 'xxxx', 'fullname': 'xxxx', 'first_name': 'xxxx', 'last_name': 'xxxx'} and print("backend.extra_data", backend.extra_data(user, user_id, response)) gives me back {'auth_time': 1579705033, 'access_token': 'xxxx', 'token_type': 'bearer'}. Several other things I tried do not return group information. Python Social Auth's documentation did not help. How do I get the list of groups for a given user using whatever is available in social-auth-app-django?
keycloak and social-auth-app-django : how to get user's groups
You can set the build result with currentBuild.result:
stage('Version, Build and Test Updated Roles') {
    when {
        allOf {
            branch 'feature/ABC'
            expression { currentBuild.currentResult == 'SUCCESS' }
        }
    }
    steps {
        script {
            try {
                powershell script: '''
                    try {
                        $env:BRANCH_NAME
                        Invoke-Build -Task Version, BuildUpdatedRoles -VSTS -ErrorAction Stop
                    } catch {
                        Write-Output $PSItem
                        exit $LastExitCode
                    }
                '''
            } catch (err) {
                currentBuild.result = 'UNSTABLE'
            }
        }
    }
}
ShareFollowansweredJan 22, 2020 at 16:17fredericrousfredericrous2,95311 gold badge2525 silver badges2727 bronze badges1Thanks for input. I just found out that the PS script required some changes in order for the build to fail (or unstable) if pester test failed. See new answer–Eric DunnJan 22, 2020 at 18:24Add a comment|
I have the following Jenkinsfile to run a Powershell Pester Test. How do I get the build result of 'UNSTABLE' if the pester tests do NOT pass?stage('Version, Build and Test Updated Roles') { when { allOf { branch 'feature/ABC' expression { currentBuild.currentResult == 'SUCCESS' } } } steps { powershell script: ''' try { $env:BRANCH_NAME Invoke-Build -Task Version, BuildUpdatedRoles -VSTS -ErrorAction Stop } catch { Write-Output $PSItem exit $LastExitCode } ''' } }
How do I 'UNSTABLE' a Jenkins pipeline build if Pester test fails?
Check that the script file is actually there with sh 'ls' just after the git step. Generally I would recommend not using the git step but checkout instead; it is more powerful and more reliable:
checkout([
    $class: 'GitSCM',
    branches: scm.branches,
    extensions: scm.extensions,
    userRemoteConfigs: [[
        url: 'https://github.com/rk280392/pipeline_scripts.git'
    ]]
])
Is your script executable? You could use chmod +x puppet_master.sh before running it with a dot-slash prefix, ./puppet_master.sh, or run it through the shell: sh 'sh puppet_master.sh'.
ShareFollowansweredJan 21, 2020 at 14:50fredericrousfredericrous2,95311 gold badge2525 silver badges2727 bronze badgesAdd a comment|
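Putting that together, a sketch of the build stage (the script name comes from the question; the ls is only there for debugging):

stage('puppet master config build') {
    steps {
        sh 'ls -la'                      // confirm puppet_master.sh is actually in the workspace
        sh 'chmod +x puppet_master.sh'   // make it executable
        sh './puppet_master.sh'          // explicit path; a bare "puppet_master.sh" is not on $PATH
    }
}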
I am new to Jenkins and trying to write a pipeline. Everything works when run as freestyle jobs, but I am facing an issue with the pipeline. My script, which should run after checking out from GitHub, returns "file not found". Could anyone help, please? Attached is an image of the log: https://i.stack.imgur.com/LuxGn.png Below is the code sample I am trying to execute.
stage('puppet master config checkout') {
    steps {
        echo "cloning github"
        git "https://github.com/rk280392/pipeline_scripts.git"
    }
}
stage('puppet master config build') {
    steps {
        echo "running puppet master script"
        sh "puppet_master.sh"
    }
}
shell script returning not found on jenkins master using pipeline as code
You can use the file parameter to pass a file to your job. For a declarative pipeline it looks like the following:
pipeline {
    agent any
    parameters {
        file(name: 'FILE', description: 'Some file to upload')
    }
    // ... stages ...
}
ShareFollowansweredJan 21, 2020 at 12:01fredericrousfredericrous2,95311 gold badge2525 silver badges2727 bronze badgesAdd a comment|
I have a file consisting of 10 VM names. I want to pass each VM name as a parameter to a Jenkins job, so that my Jenkins job performs the specified task on each machine. Can someone suggest how this can be done, and how it can be done using a pipeline script? Example: File.txt contains the variables below
VM1 VM2 VM3 .. vm10
and I want to pass the values to the Jenkins job called "Setupenvironment". Please suggest.
How to pass variable values (VM names) from a text file to a Jenkins job so that it performs the task on each VM