Columns: Response (answer text) · Instruction (question body) · Prompt (question title)
ClassA._(); is a named constructor with the name _. In Dart, identifiers that start with an underscore (_) are only visible inside the library they are contained in. The reason you would define a class this way is that you want to prevent people from creating instances of this class.
This question already has answers here: "The difference between the use of constructor className() and className._()" (2 answers). Closed 2 years ago.

I'm still getting used to Dart and I've searched for an answer to this, but I haven't found one. What does the second line, 'ClassA._();', of the class definition do? The list declaration is only shown to give the class a reason for being.

class ClassA {
  ClassA._();
  static final List<String> someList = ["A", "B", "C"];
}

If it's a constructor, how would it be invoked? After doing some more sleuthing I see it creates a singleton. But when is the singleton created? Is there a way to get in front of the instantiation and make some mods? Thanks in advance for any help given.
Is this a Dart Constructor? [duplicate]
Solved! There were 2 issues:

1. Set the DNS for Docker to 8.8.8.8 and 8.8.4.4.
2. Set the proxy in the Docker Desktop application, according to my proxy settings.

:)
I have installed a gitlab runner within a protected network. In short:Runner installed on Windows 10 ProRunner registered with DockerDocker running (also tried with restart)Starting my pipeline with the runner, the pipeline starts, but I get this error:Using Docker executor with image ruby:2.6 ... Pulling docker image ruby:2.6 ... WARNING: Failed to pull image with policy "always": Error response from daemon: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) (manager.go:203:15s) ERROR: Job failed: failed to pull image "ruby:2.6" with specified policies [always]: Error response from daemon: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) (manager.go:203:15s)I then tried setting up the proxy following thisguide. Result => pipeline blocked, no result and blank screen (runner is not even found in my opinion).How can I fix this? If the runner was found by gitlab without proxy I don't think this is the problem. But still the pipeline is not executed.Clarification: connecting the runner to the open network executes the pipeline correctly.The proxy configuration of this Windows PC looks something like this:netsh winhttp set proxy 194.13X.X.X:9000 bypass-list="10.1XX.X.X/22" Thank you!
GitLab Runner in private network
Just in case the '^' in develop^1 is not correctly interpreted by the shell, try instead:

git rev-parse origin/develop~1

In your case, the first parent (^1) should be the same as the first-generation ancestor of the named commit object (~1). Try also a git log origin/develop in your pipeline, just to confirm that, in the context of said pipeline execution, there is indeed a history fetched and associated with origin/develop.
I am executing the following command :npm run affected:build:dev -- --base="$(git rev-parse origin/develop^1)"with expected output to be the hash of the previous commit in develop ex once runnx affected:build --configuration=develop "--base=09a1a7cf53c00a2010d907574710c71674acdf80"this command works fine when I run on the terminal of my computer, but when running inside my bitbucket pipeline, it fails with the following error :+ npm run affected:build:dev -- --base="$(git rev-parse origin/develop^1)" fatal: ambiguous argument 'origin/develop^1': unknown revision or path not in the working tree. Use '--' to separate paths from revisions, like this: 'git <command> [<revision>...] -- [<file>...]' >[email protected]affected:build:dev > nx affected:build --configuration=develop "--base=origin/develop^1" fatal: Not a valid object name origin/develop^1 fatal: No such ref: 'origin/develop^1' nx affected:buildI don't understand the cause, my pipeline is running the same versions as my local.git version 2.25.1node v16.13.0npm 8.1.0what is the syntax that git is expecting ?
fatal: ambiguous argument when using git in my pipelines
You might be able to use only:changes / except:changes to do that. You can have two jobs: one job that runs for folder-a if something under folder-a/* has changed, and another job that runs for folder-b if something under folder-b/* has changed.
I have a repository on GitLab with a directory structure similar to this:

folder-a
  - python-a.py
folder-b
  - python-b.py

I am trying to set up a CI/CD pipeline on GitLab that will detect changes made to the Python code and deploy them to a production server. What I have currently is that the user has to trigger the pipeline manually and input the folder name as a variable, which then causes the pipeline to "cd" into the folder and deploy the code inside it. Is there any configuration or setting that can be added to the pipeline so that whenever a Merge Request is merged to the main branch, the pipeline triggers, detects which code was changed, and then deploys the respective code without having the user manually trigger it and input the folder name as a variable?
Trigger Gitlab CI/CD pipeline to deploy specific part of the repository
This happens when you try to SSH to the server for the first time. You can disable host checking with the option StrictHostKeyChecking=no; below is the complete command for your reference:

ssh -o StrictHostKeyChecking=no $USER@$SERVER -p$PORT 'echo "connected to remote host as $USER"'

PS: disabling host checking is not a secure way to do this. Instead, you can add the server key to your ~/.ssh/known_hosts by running ssh-keyscan host1, replacing host1 with the host you want to connect to.
Hi i have a problem configuring bitbucket pipeline with ssh login on my remote server.The output of error is:ssh_askpass: exec(/usr/bin/ssh-askpass): No such file or directoryHost key verification failedThese are the steps i follow:generate private and public keys (without password) on my server using this command:ssh-keygen -t rsa -b 4096add base64 encoded private key under Repository Settings->Pipelines->Deployments->Staging environmentspush file "my_known_hosts" on the repository created with:ssh-keyscan -t rsa myserverip > my_known_hostsI also tried to do another test:generate keys from Repository Settingscopy public key to authorized_keys file on my remote servertype the ip of my remote server in "Known hosts" click fetch and addchmod 700 ~/.sshchmod 600 ~/.ssh/authorized_keysThis is how i configure pipeline ssh connectionimage: atlassian/default-image:latest pipelines: default: - step: name: Deploy to staging deployment: staging script: - echo "Deploying to staging environment" - mkdir -p ~/.ssh - cat ./my_known_hosts >> ~/.ssh/known_hosts - (umask 077 ; echo $SSH_KEY | base64 --decode > ~/.ssh/id_rsa) - ssh $USER@$SERVER -p$PORT 'echo "connected to remote host as $USER"'I'm trying all possible things but still can't connect.Can anyone help me?
Host key verification failed bitbucket pipeline
The issue was solved after updating Snakemake to version 6.5.2 from 5.30.1.
I have written Snakemake rule which runs Muscle (MSA-tool) to calculate multiple sequence alignment (MSA) for all files in a directory. The task is trivially parallel, as different files do not depend on each other. The problem is, that Snakemake runs this rule in n-number of "batches", where n is cores given to Snakemake as an argument:snakemake -j 4 msa.Snakemake starts with running 4 jobs in parallel and it waits until each one of them is finished before starting a new "batch" of 4 jobs. This wastes CPU time, as the input files vary a lot in size and their MSA calculation time can vary from seconds to minutes. Resulting in following execution flow:job1|----- |job5|----- |...|-> job2|--- |job6|-------- |...|-> job3|----------------|job7|-- |...|-> job4|- |job8|----------|...|->How could I tell Snakemake to truly parallelize the jobs?CLUSTER_IDS, = glob_wildcards(os.path.join(WORK_DIR, "fasta", "{id}.fasta")) rule msa: input: expand(os.path.join(WORK_DIR, "msa", "{id}.afa"), id=CLUSTER_IDS) rule: input: os.path.join(WORK_DIR, "fasta", "{id}.fasta") output: os.path.join(WORK_DIR, "msa", "{id}.afa") shell: "{MUSCLE_PATH}/muscle3.8.31_i86darwin64 -in {input} -out {output}"
Snakemake waiting to finish all parallel jobs before starting next parallel job
You appear to be using the SkLearn2PMML package for the Scikit-Learn to PMML conversion work. AutoML uses custom transformer and estimator types in the fitted pipelines. The SkLearn2PMML package does not support them yet (for a list of supported types, see here), so it fails with an error. In principle, AutoML support can be added to SkLearn2PMML, but it would require some development work. If you're interested in seeing that happen, please consider opening a proper feature request with the project.
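For reference, a minimal sketch of how SkLearn2PMML is normally used with supported estimator types (the dataset, pipeline contents, and output file name here are illustrative only, not taken from your AutoML setup):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

X, y = load_iris(return_X_y=True)

# Wrap supported Scikit-Learn steps in a PMMLPipeline, fit it, then export to PMML.
# The conversion step calls a Java backend, so a Java runtime must be installed.
pipeline = PMMLPipeline([("classifier", LogisticRegression(max_iter=1000))])
pipeline.fit(X, y)
sklearn2pmml(pipeline, "model.pmml")

The same call fails for an AutoML pipeline precisely because its internal steps are not in the supported-types list.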
I have already completed training using autosklearn.classification.AutoSklearnClassifier(). I want to convert the trained model to PMML. After changing the SimpleClassificationPipeline to the sklearn pipeline in the trained model, I used the sklearn2pmml library, but it did not work. How can I convert a model trained through autosklearn into PMML? (automl_model, automl_model_to_pipeline, modle_to_pmml)
How to convert automl model ( using autosklearn ) to pmml?
Open your build.gradle file and add the code below under android:

android {
    lintOptions {
        checkReleaseBuilds false
    }
}

For more details, see here.
Android build using AWS CodeBuild fails with following error:Execution failed for task ':app:lintVitalRelease'. > Could not resolve all files for configuration ':app:lintClassPath'. > Could not download groovy-all-2.4.15.jar (org.codehaus.groovy:groovy-all:2.4.15) > Could not get resource 'https://repo.maven.apache.org/maven2/org/codehaus/groovy/groovy-all/2.4.15/groovy-all-2.4.15.jar'. > Could not GET 'https://repo.maven.apache.org/maven2/org/codehaus/groovy/groovy-all/2.4.15/groovy-all-2.4.15.jar'. > Connection resetThe build works fine on my local machine. The above mentioned error occurs only while trying to build the apk using AWS CodeBuild.If this has something to do withjcenter shutdown, how to fix it?
React native android build fails with Execution failed for task ':app:lintVitalRelease'. Could not download groovy-all-2.4.15.jar
You can use the $concat operator to concatenate fields with delimiters:

{ '$addFields': { 'f3': { '$concat': ['$f1', '-myDelimiter-', '$f2'] } } }

"I used myJoin just for example. My custom function performs a series of cryptographic computations and bytes manipulations."

I don't think it is possible to integrate custom Python code into a query in current MongoDB v4.4. There is a $function operator starting from MongoDB 4.4, where you can write JavaScript code and execute it in the query, but it's expensive for query performance. I would suggest you do this operation after the query, on the result.
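A minimal sketch of that last suggestion in pymongo, assuming the collection and myJoin from the question (the connection details and the $set write-back are illustrative and optional):

from pymongo import MongoClient

def myJoin(left, right):
    return left + '-myDelimiter-' + right

mydb = MongoClient()['mydatabase']  # assumed connection details

# Run the plain query, then compute f3 in Python on each result document.
for doc in mydb.mycollection.find({'f1': {'$exists': True}}):
    f3 = myJoin(doc['f1'], doc['f2'])
    # Optionally persist the computed value back onto the document.
    mydb.mycollection.update_one({'_id': doc['_id']}, {'$set': {'f3': f3}})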
Using MongoDB 4.0 I have to compute a new field basing on existing field. An example of documents in mycollection:{ _id: ..., f1: 'a', f2: 'b' }My python code, after connecting to DB:Please note thatmyJoinis here just for example. In real use scenario this function is quite complex (custom cryptographic calculations and bytes manipulations).def myJoin(left, right): return left+'-myDelimiter-'+right myPipeline = [ { '$match': { 'f1': { '$exists': True } } }, { '$addFields': { 'f3': myJoin('$f1', '$f2') } } ] mydb.mycollection.aggregate(myPipeline)However in DB I see:{ _id: ..., f1: 'a', f2: 'b', f3: '$f1-myDelimiter-$f2' }But I want:{ _id: ..., f1: 'a', f2: 'b', f3: 'a-myDelimiter-b' }For the moment I am using pipeline aggregations. But other strategies are welcome.
Pass an aggregate intermediate result field as python function argument in Pymongo
Dagster actually automatically stores these logs (in a structured format) for you. This is configurable by setting event_log_storage in your dagster.yaml file (so you can choose what type of database it uses), but by default they all get stored in a local SQLite database in your $DAGSTER_HOME directory. The docs here: https://docs.dagster.io/deployment/dagster-instance#event-log-storage explain a bit more about how this works. I'd also recommend checking out Dagit, which works with these stored event logs to help visualize past solid executions (among many other uses!).
Hi everyone, I've started using dagster about a week or so ago and I'm fascinated by the tool. However, I was wondering if it's possible to collect the metadata that is produced by dagster in the output. The regular dagster output goes like this:

2021-06-17 15:12:30 - dagster - DEBUG - my_pipeline - 47989433-702c-4246-9c8d-ab4c8bab4be6 - 13936 - merge_transformations - LOADED_INPUT - Loaded input "clean_daag_df" using input manager "io_manager", from output "result" of step "clean_dzag"
[...]
2021-06-17 15:12:30 - dagster - DEBUG - my_pipeline - 47989433-702c-4246-9c8d-ab4c8bab4be6 - 13936 - merge_transformations - STEP_SUCCESS - Finished execution of step "merge_transformations" in 98ms.

I'd like to know how to access this information, especially the start and finish time of each solid as well as the pipeline run id and, if possible, the id of each solid execution (instead of just seeing the output on the screen, I'd like to export it to a file or to a database). Thanks in advance for any help.
Collect Metadata with Dagster
Dagster doesn't provide any global way to get access to objects in the solid context - you'll need to have your function accept a parameter and pass the logger to it from your solid.

import logging

lgr = logging.getLogger('console_logger')

def random_func(logger=lgr):
    logger.info('in random func')
    print('\nhi\n')

@solid
def test_logs(context):
    context.log.info("Hello, world!")
    random_func(logger=context.log)
I can configure a custom logger (say, a file logger) which I can successfully use from within a solid fromcontext.log.info(for example). How can I use that same logger from within a standard Python function / class ?I am using the standardcolored_console_loggerso that I can directly see in the console what is happening. The idea is to swap it (or use alongside it) with another (custom) logger.Reproducible example: test_logging.pyfrom dagster import solid, pipeline, execute_pipeline, Field, ModeDefinition from dagster.loggers import colored_console_logger from random_func import random_func @solid def test_logs(context): context.log.info("Hello, world!") random_func() @pipeline(mode_defs=[ ModeDefinition(logger_defs={"console_logger": colored_console_logger}) ]) def test_pipeline(): test_logs() if __name__ == "__main__": execute_pipeline(test_pipeline, run_config={ 'loggers': { 'console_logger': { 'config': { 'log_level': 'DEBUG', 'name': 'console_logger', } } } })random_func.pyimport logging lgr = logging.getLogger('console_logger') def random_func(): lgr.info('in random func') print('\nhi\n')
Dagster use dagster's custom loggers in standard python functions called from solids
You can use the $BITBUCKET_STEP_TRIGGERER_UUID to resolve the actual username the following way:

export BITBUCKET_TRIGGERER_USERNAME=$(curl -X GET -g "https://api.bitbucket.org/2.0/users/${BITBUCKET_STEP_TRIGGERER_UUID}" | jq --raw-output '.display_name')

I found the answer in one of the comments here: https://jira.atlassian.com/browse/BCLOUD-16711
I have created a CI/CD pipeline with Bitbucket Pipelines. Inside the bitbucket-pipelines.yml file I defined a custom pipe. E.g.:

custom:
  manual-deploy:
    - step:
        name: Manuel Deploy
        services:
          - docker
        caches:
          - maven
        script:
          - echo "Deploy..."

I need to get the username or nickname of the user who triggered a custom step. How can I do this? I read in the documentation that the variable BITBUCKET_STEP_TRIGGERER_UUID exists, but I don't know how to identify which user this UUID belongs to.
[Bitbucket][Pipelines] Get the name of the user who launched the pipeline
Using the rxjs iif, you can conditionally write observables, but it is not used for dealing with operators. Since setLoading is an operator in your case, it cannot be used with iif. To use setLoading conditionally in pipe, you'll have to write something similar to:

getData(query)
  .pipe(
    condition ? setLoading(this.store) : tap(() => /* some other action */)
  )

EDIT: In case you don't want to do anything in the else case and want to execute the tap always, you need to use the identity operator:

getData(query)
  .pipe(
    condition ? setLoading(this.store) : identity,
    tap(() => /* some other action */)
  )
I'm trying to insert 2 pipe operators into pipe function, but I want to apply the first one by condition, else, only the second one will be applied.that's the way it looks now without the condition:getData(query).pipe(setLoding(this.store),tap(//some actions here...))setLoadingis an Akita pipe, and I would like it to be applied with some boolean condition.I tried to use rxjs'siif()but I received an error sincesetLodingis not a type ofSubscribableOrPromise.Can anyone think of another way?
How to add rxjs pipe operator dynamically with condition?
You need a / instead of a \:

- cd CICD
- ./build_script.ps1 x86 $PROJECT_PATH
I am running a pipeline job on a Windows VM, with default executor asshell. In this job, I wish to run a powershell script, which is located in a directory called CICD, by the following lines:- powershell .\CICD\build_script.ps1 x86 $PROJECT_PATH - .\CICD\build_script.ps1 x86 $PROJECT_PATH - '.\CICD\build_script.ps1 x86 $PROJECT_PATH'The script has 2 arguments: x86 and $PROJECT_PATH. Everytime I try to run one of these commands, I get this error:.\CICD\build_script.ps1 : The term '.\CICD\build_script.ps1' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1 + .\CICD\build_script.ps1 x86 C:\GitLab-Runner\builds\BRmNra1o\0\mihnea ... + ~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : ObjectNotFound: (.\CICD\build_script.ps1:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundExceptionIs there a proper way to run this script?
How can I run a PowerShell script in GitLab pipeline?
Please try this expression: @split(item().name,'_')[3]. It splits the file name on '_' and takes the element at index 3, which is COMPCODE for "Common_EUR_AP_COMPCODE_YYY_MM_DD". Reference: https://learn.microsoft.com/en-us/azure/data-factory/control-flow-expression-language-functions#split
In Azure Data Factory, I am getting "Common_EUR_AP_COMPCODE_YYY_MM_DD" as the file name from a "Get Metadata" activity, which is then going through a "ForEach" loop. Now I want to take just the "COMPCODE" bit of it inside ForEach > "Set variable" and ignore the rest. Can somebody please help on how to do it? I tried many ways but the closest one was "@substring(item().name,add(indexof(item().name,''),3),add(lastindexof(item().name,''),1))".
Substring of a file name in ADF
What would be the good way of using dask distributed? Should I for example execute client.map 10 times (200 tasks each time)? Is there a way to "force" dask to execute a code from start to end before launching a new dask?

One option is to designate a higher priority for earlier tasks. client.map assigns a single priority value, so to specify a specific priority for each task use client.submit:

futures = []
for n, f in enumerate(input_files):
    futures.append(client.submit(transforms, f, priority=-n))

In this case, later tasks will have lower priority, so they should be completed after tasks with higher priority are completed. Since you have multiple steps of transformations, you will also want to assign later transformation functions a higher priority value.
I am using dask to apply several transformations on thousand of images. For each image, there are 5 transformations that need to be done sequentially. I would like to distribute this pipeline on a HPC cluster. I have 200 available CPU so I would like to be able to perform e.g.input_files = [a list of 2000 files]futures = client.map(transforms, input_files)Dask distributed would ideally run 200 transformations at a time. Write the desired output as soon a a processing is finished etc.However it seems that it does not work exactly this way. I observe that dask tends to start 200 tasks but only the first 3 steps, then e.g. 50 tasks or equivalent random processing.What would be the good way of using dask distributed? Should I for example execute client.map 10 times (200 tasks each time) ? Is there a way to "force" dask to execute a code from start to end before launching a new dask ?
Dask: build and execute efficient pipeline
The problem is that when we access ids via $$files.versions.file_id, it returns an array of arrays of ids, so $in will not match the nested array of ids. I can see you are trying to project the file details at the same nested level, so a direct lookup will not set that detail in the nested array; you have to deconstruct the array first before setting the file details:

- $unwind to deconstruct the files array
- $unwind to deconstruct the versions array
- $lookup with the files collection, passing files.versions.file_id as localField
- $unwind to deconstruct the files.versions.file_id array
- $group by name and file name and re-construct the versions array
- $group by name only and reconstruct the files array

{ $unwind: "$files" },
{ $unwind: "$files.versions" },
{
  $lookup: {
    from: "files",
    localField: "files.versions.file_id",
    foreignField: "_id",
    as: "files.versions.file_id"
  }
},
{ $unwind: "$files.versions.file_id" },
{
  $group: {
    _id: { name: "$name", file_name: "$files.name" },
    versions: { $push: "$files.versions" }
  }
},
{
  $group: {
    _id: "$_id.name",
    files: { $push: { name: "$_id.file_name", versions: "$versions" } }
  }
}

Playground
I have the following schemaThing:{ name: "My thing", files: [ { name: "My file 1", versions: [ { file_id: ObjectId("blahblahblah") }, { file_id: ObjectId("blahblahblah") }, ], }, { name: "My file 2", versions: [ { file_id: ObjectId("blahblahblah") }, { file_id: ObjectId("blahblahblah") }, ], } ] }And then I a have aFileschema:{ _id: ObjectId("blahblah"), type: "image", size: 1234, }Thefile_idin theThingschema is a REF to the_idof theFileschema.I want to$lookupall the files inside myThing. So I started with this:{ "$lookup": { "from": "files", "let": { "files": "$files" }, "pipeline": [ { "$match": { "$expr": { "$in": [ "$_id", "$$files.versions.file_id" ] } } }. ], "as": "files.versions.file" } }But it's obviously wrong. Can someone help?
Mongodb $lookup nested objects with pipeline
For every job that you define, the code associated with the pipeline will always be fetched or cloned into the job environment before any of your script, before_script, or after_script sections are run. If your job should be running against the code in the repository, you don't have to git clone or git pull at all; it will happen automatically for you. The output you're seeing in the job is from this automatic fetch.

If you need to, you can disable the automatic fetch using the GIT_STRATEGY variable set to none:

pull-code-job:
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    - cd /usr/share/nginx/html
    - git pull http://myuser:[email protected]/user/my-app.git master

For this job, the repository will not be fetched/cloned from GitLab. This is useful if you have another job in your pipeline that pulls down your code, then builds an artifact like npm dependencies, compiled binaries from the source code, etc.

Since you're using the shell executor, the job is running directly on the host the runner is on (as opposed to within a Docker container). Looking at the git usage output, there doesn't appear to be a -c option for that git version. Try upgrading git on the host where your gitlab-runner ALPHA is running and run your pipeline again.
I have created a pipeline to run a series of jobs that deploy my application to a sandbox environment automatically. My configuration of.gitlab-ci.ymlis:stages: - pull - build pull-code-job: stage: deploy script: - cd /usr/share/nginx/html - git pull http://myuser:[email protected]/user/my-app.git master build-code-job: stage: deploy script: - npm install - npm run alphaWhen I view the pipelines log, I always find that they have failed:Unknown option: -cI no callgit -c.My version os GitLab CE is 12.0.3 and the version of git that use runners is 1.7.1.
GitLab pipeline fail: Unknown option: -c
Finally got it. Python has a - argument, which makes it read the program from stdin while passing the remaining arguments to the script: https://docs.python.org/3/using/cmdline.html

jupyter nbconvert --to python --stdout .\some_nb.ipynb | python - --argument_one=1
The use case is the following:convert jupyter notebook to pythonrun converted notebook on-the fly with additional argumentsWhat I have tried so far:jupyter nbconvert --to python --stdout .\some_nb.ipynb | pythonsome_nb.ipynbawaits for arguments viaargparseso normally I would do something like:python some_nb.py --argument_one=1When I do that:jupyter nbconvert --to python --stdout .\some_nb.ipynb | python --argument_one=1argument_oneis of course binded topythonand I am not sure how to properly pipeline this.
Run python code from stdin with additional script arguments in PowerShell
Note that your command has two sets of ' ' quotes, one nested inside the other, without any escaping. In any case:

gcloud compute ssh build-server-turnoff-when-unused --zone=europe-west4-a --command "bash -c 'cd /mnt/disks/sdb/Project/ci_dev/tmp/deploy/images/project && for fileimg in *.raucb; do curl -F image="\${fileimg}" http://upload.url.com/upload ; done '"

Notes:
- I use "bash -c" to pass to --command so it only spawns a single process remotely (avoids issues with the environment)
- cd && instead of cd ; so that if cd fails, the for does not run
- double " inside single ' inside double " to avoid excess escaping (alternate quotes trick)
- the file extension is not needed in the foreach cycle, as with for fileimg in *.raucb the var $fileimg will contain the full filename
- had to quote the $ for the variable as \$ or it gets interpreted by the parent shell
I am currently running a pipeline that relies on Google Cloud Platform for building a file. we want to automate the last step which is the file upload.- "gcloud compute ssh build-server-turnoff-when-unused --zone=europe-west4-a --command 'cd /mnt/disks/sdb/Project/ci_dev/tmp/deploy/images/project && curl -F 'image=@*.raucb' http://upload.url.com/upload'"Now the issue is that it looks that curl -F doesn't accept wildcards, is it possible to get this done?
Uploading files in pipeline with wildcard cURL
pipeline.getStages() will show you the stages in the pipeline:

>>> pipeline.getStages()
[StringIndexer_84633f93b8f6, OneHotEncoder_6a01b7a7cdc1]

Note that each list element is an object, not a string.
In the code below, a PySpark pipeline contains two tranformers. How to print out the names of these two transformers given the pipleline?from pyspark.ml.feature import (StringIndexer, OneHotEncoder) from pyspark.ml import Pipeline gender_indexer = StringIndexer(inputCol = 'Sex', outputCol = 'SexIndex') gender_encoder = OneHotEncoder(inputCol='SexIndex', outputCol = 'SexVec') pipeline = Pipeline(stages = [gender_indexer, gender_encoder])
PySpark - How to show what components are included in a Pipeline?
Is there a way to log detailed ftp traffic from microsoft hosted agent?

You could try to use network tracing to collect the FTP traffic log. You could add the configuration in the config file (e.g. web.config, app.config). For example, to collect the FtpWebRequest and FtpWebResponse information, you could use System.Net:

<configuration>
  <system.diagnostics>
    <sources>
      <source name="System.Net" tracemode="protocolonly" maxdatasize="1024">
        <listeners>
          <add name="System.Net"/>
        </listeners>
      </source>
    </sources>
    <switches>
      <add name="System.Net" value="Information"/>
    </switches>
    <sharedListeners>
      <add name="System.Net" type="System.Diagnostics.TextWriterTraceListener" initializeData="network.log" />
    </sharedListeners>
    <trace autoflush="true"/>
  </system.diagnostics>
</configuration>

Then it will create a network.log file. Since you are using the Microsoft-hosted agent, you could determine the file location and use the Publish Build Artifacts task to upload the file as a build artifact.
Is there a way to log detailed FTP traffic from a Microsoft-hosted agent? In my case, when running UI tests in a release pipeline, screenshots are uploaded to an FTP server. It worked fine for a few weeks until yesterday, when I started getting the following errors (error screenshot not included here). There was no change on the FTP server side or in the C# code used for uploading FTP files. Uploading files still works fine when I run the tests locally, and this issue happens only when they are run from the pipeline, so I thought about comparing network traffic, looking for some clues.
Azure devops pipelines, inspect ftp traffic during release
To help you better understand (1), i.e. how OHE works. Suppose you have 1 column with categorical data:

import pandas as pd

df = pd.DataFrame({"categorical": ["a", "b", "a"]})
print(df)
  categorical
0           a
1           b
2           a

Then you'll get one 1 per row (this will always be true for one column of categorical data), but not necessarily on a per-column basis:

from sklearn.preprocessing import OneHotEncoder

ohe = OneHotEncoder()
ohe.fit(df)
ohe_out = ohe.transform(df).todense()
# ohe_df = pd.DataFrame(ohe_out, columns=ohe.get_feature_names(df.columns))
ohe_df = pd.DataFrame(ohe_out, columns=ohe.get_feature_names(["categorical"]))
print(ohe_df)
   categorical_a  categorical_b
0            1.0            0.0
1            0.0            1.0
2            1.0            0.0

Should you add more data columns, e.g. a numerical column, this will hold true on a per-column basis, but not for the whole row anymore:

df = pd.DataFrame({"categorical": ["a", "b", "a"], "nums": [0, 1, 0]})
print(df)
  categorical  nums
0           a     0
1           b     1
2           a     0

ohe.fit(df)
ohe_out = ohe.transform(df).todense()
# ohe_df = pd.DataFrame(ohe_out, columns=ohe.get_feature_names(df.columns))
ohe_df = pd.DataFrame(ohe_out, columns=ohe.get_feature_names(["categorical", "nums"]))
print(ohe_df)
   categorical_a  categorical_b  nums_0  nums_1
0            1.0            0.0     1.0     0.0
1            0.0            1.0     0.0     1.0
2            1.0            0.0     1.0     0.0
I am using sklearn pipelines to perform one-hot encoding:preprocess = make_column_transformer( (MinMaxScaler(),numeric_cols), (OneHotEncoder(),['country']) ) param_grid = { 'xgbclassifier__learning_rate': [0.01,0.005,0.001], } model = make_pipeline(preprocess,XGBClassifier()) # Initialize Grid Search Modelg model = GridSearchCV(model,param_grid = param_grid,scoring = 'roc_auc', verbose= 1,iid= True, refit = True,cv = 3) model.fit(X_train,y_train)To see then how the countries are one hot encoded I get the following ( I know there are two)pd.DataFrame(preprocess.fit_transform(X_test))The result of this is:A few questions:now correct me if wrong but in one hot encoding I thought it was a series of all 0's and just ONE number 1. why do I get several ones in one columnwhen I do model.predict(x_test) it applies the trasnformations as defined in the piepline fom training?how do I retrieve the feature names when I call fit_transform?
understanding how onehotencoder works - why do i get mutliple ones in ohe column?
Any ideas on where the unnormalised regression coefficients and intercept are stored in the pipeline object?

They are not, because the pipeline doesn't do anything besides string together the transformer(s) and model. And the model object only knows about the scaled input data.

Or equivalently, how can we compute them from the normalised regression coefficients, using the standard scaler?

StandardScaler has the attributes mean_ and scale_ (also var_), which contain the per-column means and standard deviations of the original data that are used to transform the data. So we have:

y_hat = lr.coef_ * x_transformed + lr.intercept_
      = lr.coef_ * (x - scaler.mean_) / scaler.scale_ + lr.intercept_
      = (lr.coef_ / scaler.scale_) * x + (lr.intercept_ - lr.coef_ * scaler.mean_ / scaler.scale_)

That is, your unnormalized regression coefficient is lr.coef_ / scaler.scale_ and the unnormalized intercept is lr.intercept_ - lr.coef_ * scaler.mean_ / scaler.scale_. (I haven't tested that, so do check that it makes sense.)
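A quick numerical check of that claim, written as a sketch with made-up data (the variable names mirror the formulas above; the step names come from make_pipeline's default naming):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
x = rng.rand(100, 1)
y = 3.0 * x.ravel() + 2.0 + 0.1 * rng.randn(100)

pipe = make_pipeline(StandardScaler(), LinearRegression())
pipe.fit(x, y)

scaler = pipe['standardscaler']
lr = pipe['linearregression']

# Recover coefficients on the original (unscaled) feature scale.
coef_unscaled = lr.coef_ / scaler.scale_
intercept_unscaled = lr.intercept_ - np.sum(lr.coef_ * scaler.mean_ / scaler.scale_)

# These should closely match a regression fitted directly on the raw data.
direct = LinearRegression().fit(x, y)
print(coef_unscaled, intercept_unscaled)
print(direct.coef_, direct.intercept_)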
I have been trying to understand the use of Sklearn Pipelines.I run the following code to scale my data and fit a linear regression within a Pipeline and plot the regression:pipe = make_pipeline(StandardScaler(), LinearRegression()) pipe.fit(x_train, y_train) xfit = np.linspace(0, 1.25, 50) #Fake data to plot straight line yfit = pipe.predict(xfit[:, np.newaxis]) plt.scatter(x_train, y_train) plt.plot(xfit, yfit, color='r')PlotLinearRegressionHowever, when trying to plot the linear regression by hand i.e. finding the linear regression coefficients and intercept from the LinearRegression object stored in the Pipeline with the following code. Dark magic is involved as this does not display the same regression (coef + intercept) as the ones used by the Pipeline (see graph).print("Linear Regression intercept: ", pipe['linearregression'].intercept_) print("Linear Regression coefficients: ", pipe['linearregression'].coef_)The StandardScaler might be involved as removing it from the pipeline allows to find the regression coefficients using the code cell above.Where the unnormalised regression coefficients and intercept are stored in the pipeline object? Or equivalently, how can we compute them from the normalised regression coefficients, using the standard scaler?
How Linear Regression coefficients are stored in Sklearn pipelines?
In Luigi you have events. I guess that you can use the failure event and manage the error in there. You can generate the file that will work as a checkpoint for the failed task and log the message that you prefer.

Official documentation: https://luigi.readthedocs.io/en/stable/tasks.html

@luigi.Task.event_handler(luigi.Event.FAILURE)
def mourn_failure(task, exception):
    """Will be called directly after a failed execution
    of `run` on any JobTask subclass
    """
    ...
I am beginning to use Luigi. I have build a pipeline that does several tasks, and I have been careful enough to make sure the tasks work well. So the pipeline works well. While building the pipeline, in the times that tasks had failures, they got reported with:(and I edited them till they work fine.So let's say I have a pipeline that doesTask1-->Task2--> Task3In that case, if Task2 fails, Task3 is not executed and the pipeline stops at that. Usually because there was an error writing the Task2.Now imagine that there are 5 "Task1"s , 5 "Task2"s and one "Task3". So Task3 is kind of a summary task.I would like my pipelinenot to stopwhenever there is a failure but skip (and perhaps log the failure) and continue with the next case. (These "failures" will not be because the task is badly written but because let's say the data that is input to these task is corrupted in a real case scenario)Something likeThere you can see that Task1 is executed before Task2. The Tasks marked in red are "failures".So what I would like the pipeline to do is to execute Tasks 1 and Tasks 2, log if there are failures and continue and finally summarize with Task3 (even including some kind of report that there were failures)How can I do this with Luigi?
how does luigi handles task failure?
The answer here is that an update was made to the Az.CosmosDb module so that it specifically requires version 1.9.4 (or higher) of Az.Accounts. However, on the Azure DevOps hosted agent they use version 1.9.3.

To fix it, I changed the command that manually installs the Az.CosmosDb module:

Install-Module -Name Az.CosmosDb -RequiredVersion 0.1.6 -AllowClobber -Force

Note: when I found on the PowerShell Gallery that all versions of the Az.CosmosDb module were < 1.0.0, I did some investigation and it turns out that the module is still in preview, despite Cosmos DB having existed for years. This explains why the module is not installed alongside all other Az.* modules when installing Az. What is very frustrating is that there is no mention anywhere in the documentation for the module that it is in preview. If there were, it would probably have saved me a good chunk of time!
I have an Azure DevOps pipeline to rotate Cosmos DB account keys. To do this, I'm using PowerShell and theNew-AzCosmosDBAccountKeycmdlet.For some unknown reason, theAz.CosmosDBmodule is not installed withAz, so it needs to be installed manually each time the pipeline is run.Install-Module -Name Az.CosmosDb -AllowClobber -ForceWhen I run this locally everything works as expected, but within Azure DevOps I see an errorThe 'Get-AzCosmosDBAccountKey' command was found in the module 'Az.CosmosDB', but the module could not be loaded.What might be happening in Azure DevOps that differs from what is happening locally?
Azure DevOps PowerShell task fails to load 'Az.CosmosDB' module
Turns out the problem had something to do with the dependencies, specifically with the way they were installed. The pipeline didn't specify which versions of build-angular and compiler-cli were supposed to be installed, which caused the problem. After specifying the version numbers explicitly, the error was solved.
Recently there has been an update to the pipeline, where we updated the docker version to 19.03, since the update every pipeline fails. The pipelines fail when they start the testing phase, without actually running the unit tests. They give the following message every time:An unhandled exception occurred: Cannot read property 'Minus' of undefined.The 'Minus' keyword has not been used in the code (It is an Angular app).There are no problems with the used dependencies.Any idea what can the problem be?
GitLab pipeline fails with exception: Cannot read property 'Minus' of undefined
You need to keep the argument names of the fit and transform functions of the parent class:

def fit(self, X, y):
    self._model.fit(X)
    return self

def transform(self, x):
    return self._model.transform(x)
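For completeness, a minimal sketch of a clone-compatible version of the wrapper; note this is an illustration rather than your exact class, and it also reflects the fact that scikit-learn's clone() requires __init__ to store its parameters unchanged under the same attribute names:

from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import TfidfVectorizer

class Text2TfIdfTransformer(BaseEstimator, TransformerMixin):
    def __init__(self, max_df=1.0, max_features=3000):
        # Store parameters unchanged so get_params()/clone() work inside GridSearchCV.
        self.max_df = max_df
        self.max_features = max_features

    def fit(self, X, y=None):
        # Build the inner vectorizer at fit time from the stored parameters.
        self._model = TfidfVectorizer(max_df=self.max_df,
                                      max_features=self.max_features,
                                      sublinear_tf=True)
        self._model.fit(X)
        return self

    def transform(self, X):
        return self._model.transform(X)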
I am making a pipeline consisting of a tfidf vectorizer and an xgboost classifier and I am trying to find the optimal parameters for the vectorizer for my problem. I however get the following error:Cannot clone object Text2TfIdfTransformer(max_df=0.5, max_features=1000), as the constructor either does not set or modifies parameter max_df.Here is the code:class Text2TfIdfTransformer(BaseEstimator): def __init__(self, max_df = 1, max_features = 3000): self._model = TfidfVectorizer(max_df, max_features, sublinear_tf=True) pass def fit(self, data, df_y=None): self._model.fit(data) return self def transform(self, text): return self._model.transform(text) pl_xgb_tf_idf = Pipeline(steps=[('tfidf',Text2TfIdfTransformer()), ('xgboost', XGBClassifier(objective='multi:softmax'))]) parameters = {'tfidf__max_df':[.5,.6], 'tfidf__max_features': [1000]} grid = GridSearchCV(pl_xgb_tf_idf, param_grid=parameters, cv=5) grid.fit(X,labels)I'm not sure if I should declare the variables max_df and max_features when callinginitbut if I don't declare them here I get another error (that the estimator doesn't have any variables)I am sure I am missing something basic but I can't find what it is exactly, any help would be greatly appreciated!If there is some important information missing, please ask!
Cannot clone object error when using gridsearchCV in a pipeline
You could skip output: and just use log: in the rule. These log files can be used as targets or as input to other rules. As per the docs:

Log files can be used as input for other rules, just like any other output file. However, unlike output files, log files are not deleted upon error. This is obviously necessary in order to discover causes of errors which might become visible in the log file.

So the code would look like:

rule some_rule:
    input:
        "a.txt"
    log:
        "a.log"
    shell:
        "mycommand {input} > {log}"

The advantage here is that, unlike an output file, the log file will be preserved in case of job failure. However, this advantage is also a disadvantage, because if you rerun the pipeline, snakemake will not rerun the failed job, as the rule's output file (i.e. the log file here) is already present. So, unless log preservation is important when a job fails, you might be better served with the solution suggested by Maarten-vd-Sande.
I would like to create a Snakemake rule where there are: input, log, shell sections. There is no output, I would like to catch the log only as a result of the command.
Snakemake: how to create rule without explicit output file, and only with specified input, and log files?
Generate a new SSH key. Go back to your Git account, remove the old SSH key, and add the newly generated SSH key. As a best practice, always set an SSH key passphrase instead of leaving it blank.
When I use Azure DevOps Pipeline ,I received this error:"fatal:could not read username for ‘http:xx.xx.xx.xx:’ No such device or address, git fetch failed with exit code 128".When:I login to the agent machine;Use the commandgit clone http://xx.xx.x.x/x/xx/(Azure DevOps repo); andInput username and passwdIt shows“fatal:Authentication failed for ...”. I tried using PAT and SSH,But they both failed. The port 443 and 43 is isolation attentions: is http, not https
Azure DevOps ,git fetch failed with exit code 128,fatal:Authentication failed for
The answer to your question is in the GridSearchCV documentation. See the Attributes section: best_estimator_ is where the best model is stored, so you can access it from there after you are done with fitting. You can use it by directly calling CV.best_estimator_, you can make a new reference to it, or you can pickle it for later using joblib, i.e.:

import joblib
joblib.dump(CV.best_estimator_, 'my_pipeline.pkl')

Later you can load your model for further work:

import joblib
my_pipeline = joblib.load('my_pipeline.pkl')

If you do not need the model, but only its hyperparameters, you can access those from the best_params_ attribute, i.e.:

CV.best_params_

which is a dictionary of the best settings that you can use to construct a new pipeline.
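A short sketch of that last option, assuming the pipeline, CV, X, and y objects from the question: set_params accepts the best_params_ dictionary directly (its keys use the step__parameter naming), so you can refit the same pipeline definition on the full data without redoing the grid search.

from sklearn.base import clone

# Fresh, unfitted copy of the pipeline with the winning hyperparameters applied.
final_model = clone(pipeline).set_params(**CV.best_params_)
final_model.fit(X, y)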
I do a machine learning model training with pipelines, K-fold cross validation with Python and sklearn on a subset of my all historical data (omitting a test set), along the following:pipeline = Pipeline([("combiner", PolynomialFeatures()), ("dimred", PCA()), ("classifier", RandomForestClassifier())]) parameters = [...] CV = GridSearchCV(pipeline, parameters, cv=5, scoring="f1_weighted", refit=True, n_jobs=-1) CV.fit(train_X, train_y)So far, so good. However, at the end, I want to retrain the winning pipeline hyperparameter combination on my full X and y, without any cross validation. How could I have this? Simply applyingCV.fit(X, y)again would re-doing the whole alternating process with CV, which is obviously unnecessary. I could also parseCV.get_params()for the best combination hyperparameters and build up the pipeline again accordingly, but this somehow seems clumsy and unprofessional...
How to retrain pipeline with different data in Scikit-learn?
Echo sends its output to the 2nd foreach. I think you want to use write-host, which doesn't send its output to the rest of the pipeline. Echo would do the same thing in cmd or bash.

1..2 | foreach { write-host $_; $_ } | foreach { $_*$_ }
1
1
2
4
I'm waiting that response of this expression1..2 | foreach { echo $_; $_ } | foreach { $_*$_ }be like1 1 2 4but receive1 1 4 4So i dont understand why is this happends.
When pipeline and foreach create value?
I think you need to add a remote server. You can see below a sample .yml file:

image: node:10

pipelines:
  branches:
    master:
      - step:
          name: Installation
          caches:
            - node
          script:
            - npm install
          artifacts:
            - node_modules/** # Save modules for next steps
      - step:
          name: Lint
          script:
            - npm run lint
      - step:
          name: Build
          script:
            - npm run build:production
          artifacts:
            - dist/** # Save build for next steps
      - step:
          name: Deploy
          script:
            - echo "$(ls -la)"
            - echo "$(ls -la dist)"
            - ssh user@your-server rm -rf /var/www/www.your-domain.net/html/your-app
            - scp -r dist/your-app user@your-server:/var/www/your-domain.at/html/your-app
I am trying to deploy my angular universal app on server through bitbucket pipeline. I have written scripts inbitbucket-pipelines.ymlas follows:pipelines: default: - step: name: Build app caches: - node script: - npm install - npm install -g @angular/cli - npm run build:ssr - npm run serve:ssr artifacts: - dist/**Mypackage.jsonhas following scripts:"scripts": { "ng": "ng", "start": "ng serve -c=dev -o", "build": "ng build", "test": "ng test", "lint": "ng lint", "e2e": "ng e2e", "build:ssr": "npm run build:client-and-server-bundles && npm run webpack:server", "serve:ssr": "node dist/server.js", "build:client-and-server-bundles": "ng build --prod && ng run my-app:server", "webpack:server": "webpack --config webpack.server.config.js --progress --colors" },Whennpm run serve:ssrexecutes, I see that it gives same output as on localhost i.e.Node server listening on http://localhost:4000. It gets stuck at this point. What am I doing wrong here?
How to write bitbucket pipeline for deploying my angular universal app?
Use changelog: false to disable changelog generation, more detail here:

pipeline {
    agent any
    options { skipDefaultCheckout(true) }
    stages {
        stage('Build') {
            steps {
                checkout scm
                // build related tasks
            }
        }
        stage('Deploy') {
            when { branch "master" }
            steps {
                script {
                    node("docker-ee") {
                        script: checkout scm: [$class: 'GitSCM',
                                branches: [[name: '*/master']],
                                doGenerateSubmoduleConfigurations: false,
                                extensions: [],
                                submoduleCfg: [],
                                userRemoteConfigs: [[credentialsId: 'some.client.id', url: 'https://somegithuburl.git']]],
                            changelog: false,
                            poll: false
                    }
                }
            }
        }
    }
}
I have a jenkins pipeline which checkouts the project repository from github to build the project in the build stage, in the next deploy stage we checkout another repository in github to read the configurations pertaining to deployment.Since we checkout two times jenkins shows two workspaces along with two changesFor the build changes of the actual projectFor the deploy changes of the deploy configuration repoHow can I limit the workspace and changes only to1. For the build changes of the actual project?My pipeline looks something like below:pipeline { agent any options { skipDefaultCheckout(true) } stages { stage('Build') { steps { checkout scm // build related tasks } } stage('Deploy') { when { branch "master" } steps { script { node("docker-ee") { script: checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'some.client.id', url: 'https://somegithuburl.git']]]) } } } } } }
how to not show 2 workspace and changes in jenkins?
Here are a few ideas:

- Use Azure Functions + Blob Trigger / Event Grid to process the JSON files in real time (every time a new JSON file arrives, it will trigger your function). Then you could insert either into the final table or into a temporary table. A rough sketch of this option follows below.
- Another idea would be to combine Azure Functions + Blob Trigger / Event Grid to sink the data to a data lake. You can use ADF to sink it to the final SQL tables.
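A rough sketch of the first idea as a Python Azure Function body; the table name, columns, and connection-string setting are placeholders (not taken from your environment), and the blob trigger binding itself is configured separately in function.json:

import json
import os

import azure.functions as func
import pyodbc


def main(myblob: func.InputStream):
    # Each invocation receives one newly arrived JSON blob.
    record = json.loads(myblob.read())

    conn = pyodbc.connect(os.environ["SQL_CONNECTION_STRING"])
    cursor = conn.cursor()
    # Minimal insert into a staging table; existing stored procedures can
    # transform the staged rows into the final SQL tables afterwards.
    cursor.execute(
        "INSERT INTO staging_json (blob_name, payload) VALUES (?, ?)",
        myblob.name,
        json.dumps(record),
    )
    conn.commit()
    conn.close()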
I currently have pipelines developed that leverage Azure Data Factory for orchestration and Azure DataBricks for it's compute to perform the following actions... I receive tens of thousands of single record json files into Azure Blob in a real-time basis and on a 15 minute basis i check the folders for any new files and once found I load them into a dataframe using Databricks and load these into a single file in SQL DB before having other ADF jobs trigger stored procedures which then transform my data into final SQL tables.... We are looking to move away from Databricks as we are not using it for it's true capabilities but are of course paying the Databricks costs. Looking for ideas on other solutions to load tens of thousands of jsons into SQL DB (with minimal to no transformations) on a periodic (i.e. 15 minute) basis. We are a microsoft shop so not looking to necessarily move away from Azure tools.
Looking for an alternative solution to processing tens of thousands of JSONs from Azure Blob to Azure SQL DB
You can go for IF NOT EXISTS and create the login and user accordingly:

IF NOT EXISTS(SELECT name FROM [sys].[server_principals] WHERE name = N'test')
BEGIN
    CREATE LOGIN test WITH password = 'TestPassword'
END

IF NOT EXISTS(SELECT name FROM [sys].[database_principals] WHERE name = N'test')
BEGIN
    CREATE USER test FOR LOGIN test;
END
I'm configuring a pipeline to deploy a test environment for an app, along with app services and storage there is a SQL database. I have already automated the creation of all of them in azure and is working.After the test is finished, the environment is destroyed to save resources until a new test is required.Now I need to add a SQL user to the test database while deploying, it's on the same server, to simplify the login already exists, so I want to assign the same to the new database.Something like this.Server1 already has user1 as login (was manually created)Database1 already has user1 as user (was manually assigned)Desired:Add User1 as user of database2There is a way to automate this while building the test environment?Thanks in advance.
Automate Azure SQL database creation and add a database user
Instead of Enum.each and Map.take, use Enum.map and Map.get:

Enum.map(links, fn x ->
  x
  |> URIparser.make_domain()
  |> Map.get(:authority)
end)
Good afternoon. I have a module to get the domain name from the link.defmodule URIparser do defstruct domains: [] def make_domain(uri) do case URI.parse(uri) do %URI{authority: nil} -> URI.parse("http://#{uri}") %URI{authority: _} -> URI.parse(uri) end end endAfter that I use the pipeline and get the domain I need.links = ["https://www.google.com/search?newwindow=1&sxsrf", "https://stackoverflow.com/questions/ask", "yahoo.com"] Enum.each(links, fn(x) -> URIparser.make_domain(x) |> Map.take([:authority]) |> Map.values |> IO.inspect end)That's what happens in the end:["google.com"] ["stackoverflow.com"] ["yahoo.com"] :okPlease tell us how to complement the pipeline and to put all domains into one list. Other solutions are also available.Example:%{domains: ["google.com", "stackoverflow.com", "yahoo.com"]}
Put the data in the list with Elixir pipeine and Enum
I'm guessing you are working with hosted agents; therefore, you need to configure kube.config on the hosted agent. In order to do that, run:

az aks get-credentials --name $(CLUSTER_NAME) --resource-group $(RESOURCE_GROUP_NAME)

The easiest way is to use the Azure CLI task. Be aware that this task requires authorization from Azure DevOps to Azure. More info can be found here. In case you are the subscription owner, select your subscription and click on Authorize. When kube.config is configured on the hosted agent, you can run any kubectl command you wish (using PowerShell\Bash\CMD).
I'm currently working on a pipeline job that requires kubernetes access through powershell. The only issue is that I need to sign in for Az cli. For testing I'm using my personal credentials, clearly not a good definitive option. Are there any other options for Azure cli login that could be used instead?
Accessing Azure Cli from Prowershell in devops pipeline
In a nutshell, use '|' instead of ','. E.g.:

.products[].attributes.artifactDetails.url = "abc" | .products[].cookbookName = "cookbook"
i have below json file, i need to update all the key values using one jq command.{ "changeDetails": { "chgNumber": "$ASKNOW_CRQ" }, "environmentType": "$ENV_TYPE", "fqdn": "$FQDN.visa.com", "products": [{ "action": "deploy", "attributes": { "artifactDetails": { "url": "$ARTIFACT_URL" }, "containers": "$CONTAINER_NAME" }, "productName": "$PACKAGE_ID", "cookbookName": "visa_springboot" }], "tpg": "O&I"}Below jq command works and able to update only below keys in json file. + {environmentType:"xz", fqdn:"abc", tpg:"mnop" }using below sample, i am able to update all the keys by running multiple jq commandscontents="$(jq '.products.action = "abcde"' test.json)" echo "${contents}" > test.jsonUsing below command it is creating multiple json file for each value update..products[].attributes.artifactDetails.url = "abc", .products[].cookbookName = "cookbook"I need only one JQ command to update all the values in Json file and output should be redirect to final.json file.
jq: one jq command to update multiple values in Json file
If you want to execute only part of the steps, you can create a Pipeline at runtime:

partial_pipe = Pipeline(logreg.steps[:-1])
partial_pipe.fit(data)

The steps of the pipeline are available in the steps attribute of the Pipeline object.
I've a machine learning pipeline --logreg = Pipeline([('vect', CountVectorizer(ngram_range=(1,1))), ('tfidf', TfidfTransformer(sublinear_tf=True, use_idf=True)), ('clf', LogisticRegression(n_jobs=-1, C=1e2, multi_class='ovr', solver='lbfgs', max_iter=1000))]) logreg.fit(X_train, y_train)I want to extract the feature matrix from the first two steps of the pipeline. Therefore, I tried to extract the sub-pipeline with first two steps in original pipeline. The following code gives error:logreg[:-1].fit(X)TypeError: 'Pipeline' object has no attribute 'getitem'How do I extract the first two steps of thePipelinewithout building a new pipeline for data transformation?
Error while extracting sub-pipeline using index from sklearn Pipeline
The pipeline exec method returns Response<List<?>>, whereas sync returns void. However, to get responses from the pipeline when using sync you have to capture the individual responses, something like this:

Response<Long> isDeleted = pipeline.del("test-1");
Response<Long> isSuccess = pipeline.hset("test-2", "a", "b");
Response<List<String>> hvals = pipeline.hvals("test-2");
pipeline.sync();
assertEquals(1, (long) isDeleted.get());
Redis pipeline has 2 options to send commands to database "exec" and "sync". I'd want to know what is the difference between them.I'll use Jedis ,java api for redis, for the examples.ExamplesExample 1try (Jedis resource = redisManager.getResource()) { Pipeline pipeline = resource.pipelined(); pipeline.multi(); pipeline.del("test-1"); pipeline.hset("test-2", "a", "b"); pipeline.exec(); }Example 2try (Jedis resource = redisManager.getResource()) { Pipeline pipeline = resource.pipelined(); pipeline.multi(); pipeline.del("test-1"); pipeline.hset("test-2", "a", "b"); pipeline.sync(); }
Redis SYNC and EXEC
This worked for me. Declare the job variables in the variables section, e.g.:

variables:
  PRIVATE-TOKEN: "TokenValue"
  PRIVATE_HEADER: "PRIVATE-TOKEN: ${PRIVATE-TOKEN}"

Then under the script section of the CI file, use the curl command as follows:

script:
  - curl -k -H ${PRIVATE_HEADER} "https://gitlab.nbg992.poc.dcn.telekom.de/api/v4/projects?search=${CI_PROJECT_NAME}"

Using the {} braces around the variable names made sure that the ":" issue doesn't show up.
I am trying to pull a project ID using gitlab REST API v4, but when I issue the curl command, I get this error:"jobs:test:script config should be a string or an array of strings"The command is this one:curl -k -H "PRIVATE-TOKEN: PRIVATE-TOKEN" "https://gitlab.nbg992.poc.dcn.telekom.de/api/v4/projects?search=$CI_PROJECT_NAME"I tried to single quote it:'curl -k -H "PRIVATE-TOKEN: PRIVATE-TOKEN" "https://gitlab.nbg992.poc.dcn.telekom.de/api/v4/projects?search=$CI_PROJECT_NAME"'But when I do it, it removes the failure, but the command is ignored. So I tried to eval it like this:eval - 'curl -k -H "PRIVATE-TOKEN: PRIVATE-TOKEN" "https://gitlab.nbg992.poc.dcn.telekom.de/api/v4/projects?search=$CI_PROJECT_NAME"'When I do it, the failure its produced again:"jobs:test:script config should be a string or an array of strings"Any clue how should I issue the curl command? I think what is causing the failure is the colon within the"PRIVATE-TOKEN: PRIVATE-TOKEN"
curl command not working on gitlab-ci pipeline
Dynamic content can't accept multiple wildcards or a regular expression, based on my test. You have to use multiple activities to match the different types of your files. Or you could consider a workaround using a Lookup activity + ForEach activity:

1. The Lookup activity loads all the file names from the specific folder (Child Items).
2. Check the file format in the ForEach activity condition (using the endswith built-in function).
3. If the file format matches the filter condition, go into the True branch and configure it as the dynamic path of the dataset in the copy activity.
I have a condition where i have more than 2 types of files which i have to filter out. I can filter out 1 type using wildcard, something like: *.csv but cant do something like *.xls, *.zip.I have a pipeline which should convert csv, avro, dat files into .parquet format. But, folder also have .zip, excel, powerpoint files and i want them o be filtered out. Instead of using 3-4 activities i am finding if any way i can use (or) condition to filter out multiple extensions using wildcard option of data factory?
ADF-Azure Data factory multiple wild card filtering
The question's code seems to be after the following.

library(dplyr)
std <- 1
data_kids %>%
  filter(gender == 'muz') %>%
  summarise(age_A = median(age),
            height = age_A + rnorm(1, sd = std),
            gender = first(gender))
#  age_A   height gender
#1    14 14.42912    muz

Test data:

set.seed(1234)
data_kids <- data.frame(age = sample(10:18, 4), gender = rep(c('muz', 'baz'), 2))
I have a following code and I want R to give me back just the data.frame but it still gives back even the other value. Any idea how to simplify it?INPUT:new_guy_A <- assign('age_A', median(data_kids[gender=='muz',]$age)) %>% data.frame(age = age_A, height = age_A + rnorm(1, mean = 0, sd = std), gender = 'muz')OUTPUT:. age height gender 1 12.33566 12.33566 13.95272 muzThanks!
How to make a temporary name of a variable when using pipelines
You can use$condoperator here{ "$project": { "_id": 1, "sales": { "$cond": [ { "$eq": [{ "$size": "$sales" }, 0] }, { "sales": { "qty": 0, "soldFor": 0 } }, "$sales" ] } } }
I have the following stage in myMongoDBaggregation pipeline that returns the qty and sum of sales, which works fine:{ $lookup: { from: 'sales', let: { part: '$_id' }, pipeline: [ { $match: { $and: [{ $expr: { $eq: ['$partner', '$$part'] } }] } }, { $group: { _id: null, qty: { $sum: 1 }, soldFor: { $sum: '$soldFor' } } }, { $project: { _id: 0, qty: 1, soldFor: 1 } }], as: 'sales'}}, { $unwind: { path: '$sales', preserveNullAndEmptyArrays: true } }, { $project: { _id: 1, sales: 1 } }However, if there are no sales, then the$projectprojection returns an empty sales object, but what I'd really like is it to return a completed object, but with 0 - like this:{ sales: { qty: 0, soldFor: 0 } }
MongoDB to return formatted object when no results can be found
Thenew RegExpis evaluated in the client and included in the query as an absolute value, so you cannot include a document-dependent variable that way.To use a document field as a regular expression inside of a lookup pipeline, you will need to use a$exprclause with the$regexMatchaggregation operator. Note that the.*before and after the search string are implied, and therefore unnecessary.Something like:db.MSP_Prosper.aggregate([ { $match: { Cleavage_score: { $gte: 0.7 } } }, { $lookup: { from: "Uniprot_New_Entries", let: { "order_item": "$Protein_ID" }, pipeline: [ {$match: { $expr: { $regexMatch: { input: "$Uniprot_AC", regex: "$$order_item" }}}}, {$project: { _id: 0, test: "$$order_item", date: { name: "$Uniprot_ID", date: "$Min_Max_Of_the_Ft_chain" }}}, ], as: "cleavage_sites" }} ])Playground
Hi I am trying to use a variable that I defined in let to be used in the match lookup, but it returns no results when regex is used:It works like this:db.MSP_Prosper.aggregate([ { $match: {Cleavage_score: {$gte:0.7}}}, { $lookup: { from:"Uniprot_New_Entries", let: { order_item: new RegExp('.*' + "P62258" + '.*')}, pipeline: [{ $match: { Uniprot_AC : new RegExp('.*' + "P62258" + '.*') }}, { $project: { _id: 0, test: "$$order_item", date: { name: "$Uniprot_ID", date: "$Min_Max_Of_the_Ft_chain"} } }], as:"cleavage_sites" } }])but not when I try to use same variable defined in let function:db.MSP_Prosper.aggregate([ {$match: {Cleavage_score: {$gte:0.7}}}, { $lookup: { from:"Uniprot_New_Entries", let: { order_item: new RegExp('.*' + "P62258" + '.*')}, pipeline: [ { $match: { Uniprot_AC : "$$order_item" } }, { $project: { _id: 0, test: "$$order_item", date: { name: "$Uniprot_ID", date: "$Min_Max_Of_the_Ft_chain"} }}, ], as:"cleavage_sites" } } ])Ultimatelly I want to replcae the "P62258" with a local variable $Protein_IDHope you can help, Have tried out everything with no success.
MongoDB let variable not working in pipeline aggregation
In DevOps, it is expected that you produce an artifact in the build pipeline and then release that artifact into environments using a Release workflow. You could try to build the frontend JavaScript application in the Release pipeline, which can get artifacts from continuous integration systems such as Azure Pipelines, Jenkins, or TeamCity. You might also use version control systems such as Git or TFVC to store your artifacts. For more details, please refer to the following link: https://learn.microsoft.com/en-us/azure/devops/pipelines/release/artifacts?view=azure-devops#sources
Consider the following development flow for a frontend JavaScript application:Whilst there are endless ways to design adevelopment->staging->productionpipeline, the above is fairly standard right?Given this, why is it that pipeline providers such as Bitbucket and Azure do not allow Environment dependant variables for thebuildstep?Like most JavaScript applications, our application is specifically built for the environment that it will be run in e.g.development,staging, andproduction. Each environment has their own uniquely defined set of variables, for example;APP_URLsets the URL that the application will be accessible under.The environment variables are specifically read during the build process of the application i.e. they arebuildtimevariables, notruntimevariables.Is there a reason why these providers do not support different environment variables for thebuildstep? This seems like such an obvious thing which makes me think that actually, our entire pipeline flow is incorrect and it is us that is doing this wrong... Can any suggest a way to overcome this issue? Ideally without setting these variables within thexyz-pipelines.ymlas the whole purpose of these variables is to keep them out of the repository...
What is the Correct Flow for a JavaScript Application Pipeline with Buildtime Variables?
"My question is: If I follow the article, and perform the code in step 2, do I miss this code?" No, the author is correct. When calling model.predict(), the author is using the Pipeline class's predict() function, and as you can see in the docs, it will "Apply transforms to the data, and predict with the final estimator". So X_test is first transformed and then used to predict the objective variable.
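To make that equivalence concrete, here is a minimal sketch with toy data of my own (and TfidfVectorizer instead of the article's CountVectorizer + TfidfTransformer pair, purely for brevity): the pipeline's predict runs the fitted vectorizer's transform before the classifier, so the manual transform step is implicit.

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier

X_train = ["good movie", "bad movie", "great film", "awful film"]
y_train = [1, 0, 1, 0]
X_test = ["good film"]

# pipeline version: fit() fits the vectorizer, then the classifier
pipe = Pipeline([("tfidf", TfidfVectorizer()), ("clf", RandomForestClassifier(random_state=0))])
model = pipe.fit(X_train, y_train)
print(model.predict(X_test))        # internally: clf.predict(tfidf.transform(X_test))

# manual version doing the same steps explicitly
tfidf = TfidfVectorizer().fit(X_train)
clf = RandomForestClassifier(random_state=0).fit(tfidf.transform(X_train), y_train)
print(clf.predict(tfidf.transform(X_test)))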
I saw in an article https://towardsdatascience.com/multi-class-text-classification-with-sklearn-and-nltk-in-python-a-software-engineering-use-case-779d4a28ba5 X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.25) Step 1: instead of doing these steps one at a time, we can use a pipeline to complete them all at once pipeline = Pipeline([('vect', tfidf), ('clf', RandomForestClassifier())]) Step 2: fitting our model and saving it in a pickle for later use model = pipeline.fit(X_train, y_train) prediction= model.predict(X_test) Normally, if we do it manually: X_train= tfidf.fit_transform(X_train) X_test=tfidf.transform(X_test) model=RandomForestClassifier() model.fit(X_train,y_train) prediction=model.predict(X_test) My question is: if I follow the article and perform the code in step 2, do I miss this line? X_test=tfidf.transform(X_test) I don't see the author transform X_test; he just uses the original X_test. Is the author correct?
Why dont we transform X_test when using pipeline
'No parameters found' means that there were no fields on the record that could be mapped to database columns. Check your field-to-column mappings. If they look correct, it might be a problem with case. Try enabling Enclose Object Names on the JDBC tab.
I am using StreamSets to build a pipeline to land data from a table that sits in a sqlserver db to a table on postgres db.JDBC Query Consumer --> Timestamp --> JDBC ProducerThe pipeline passes validation checks and runs successfully on preview mode. However, the problem is that the data does not land into the postgres table. I have checked the connection string and credentials and these should be right.This is the error it throws in the logs.No parameters found for record with YY SELECT 'XX' AS fieldA, YY AS fieldB, ZZ AS fieldC::rowCount:#; skippingHow can I resolve this issue?
StreamSets data not landing into table created on postgres db
Pipelines are independent entities - while you can execute "child" pipelines, there is no functional connection between the two. One way around this is to have the child pipeline write the value to some form of intermediate storage (blob storage, a SQL table, etc), and then have the "parent" pipeline read the value after the child pipeline completes. You should also make sure the Execute Pipeline activity has the "Wait on completion" property checked. If you don't want the value retained in the storage medium, you could have the parent pipeline delete the data once it has processed it.
I have a pipeline that executes another pipeline in azure data factory v2. In the executed (child) pipeline I assign a value to a variable I want returned in the master pipeline, is this possible? Thanks
Adfv2 reference child pipeline variable in master pipeline
The formula is correct, but you are not reading the table right. The table (instruction type | relative frequency | CPI) is: type 1: 50%, 3 CPI; type 2: 20%, 4 CPI; type 3: 30%, 5 CPI. The first line means you have an instruction that takes 3 CPI, and this instruction has a frequency of 50%, which basically means every second instruction in your program is this instruction. Instruction 2 needs 4 CPI to execute but occurs only 20% of the time in your program; instruction 3 needs 5 CPI but occurs 30% of the time. Therefore you calculate 0.5 * 3 + 0.2 * 4 + 0.3 * 5 = 3.8. It is basically the average CPI. Just imagine you have the following program: INS_1 3 CPI, INS_3 5 CPI, INS_1 3 CPI, INS_3 5 CPI, INS_2 4 CPI, INS_1 3 CPI, INS_3 5 CPI, INS_2 4 CPI, INS_1 3 CPI, INS_1 3 CPI. That is 38 CPI / 10 instructions = 3.8.
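If it helps to see the same arithmetic spelled out, here is a tiny Python sketch of the weighted-average CPI computation using only the numbers from the table above (nothing new is assumed):

# relative frequency and CPI per instruction type, straight from the table
freq = {1: 0.50, 2: 0.20, 3: 0.30}
cpi  = {1: 3,    2: 4,    3: 5}

avg_cpi = sum(freq[t] * cpi[t] for t in freq)
print(avg_cpi)   # 3.8, i.e. 0.5*3 + 0.2*4 + 0.3*5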
I was reading some university material, and I found that to calculate the CPI (clock cycles per instruction) of a CPU, we use the following formula:CPI = Total execution cycles / executed instructions countthis is clear and does make sense, but for this example it says thatninstructions have been executed:instruction type frequency relative CPI 1 50% 3 2 20% 4 3 30% 5why is the total CPI equal to3*0.5+4*0.2+5*0.3 = 3.8and not3.8/3 = 1.26because following the above formula, there are a total of 3 executed instructions, or am I understanding the formula in a wrong way?
Calculating the total clock cycles per instruction in a CPU
There is no Logstash in Elastic cloud, you will need to create logstash instances in your infrastructure or in other cloud service, configure those instances to communicate with your elasticsearch nodes in elastic cloud and then you will be able to deploy the configurations through kibana.For more information on how to configure those logstash instances, see thedocumentation
I'm evaluating elasticsearch cloud and I tried creating a pipeline which consumes data from a data source and saves it to ES. I'm not sure how to enable the pipeline once created. There is no option to enable this or debug it. Please check the attachment
How to create a pipeline in logstash service in elastic cloud
You can access transformers using attributenamed_transformers_ofColumnTransformer. You have 2 transformers named'num'and'cat', sopreprocessor.named_transformers_['cat']gives you access to yourcat_transformer. Then usingnamed_stepsattribute ofPipelineyou can access yourOneHotEncodernamed'onehot'and itscategories_attribute:X = [['Male', 1], ['Female', 3], ['Female', 2]] preprocessor.fit_transform(X) Out[6]: array([[-1.22474487, 0. , 1. ], [ 1.22474487, 1. , 0. ], [ 0. , 1. , 0. ]]) preprocessor.named_transformers_['cat'].named_steps['onehot'].categories_ Out[7]: [array(['Female', 'Male'], dtype=object)]
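If what you ultimately want is column names for the transformed matrix, the fitted encoder can also generate them. Below is a sketch reusing the question's variable names (preprocessor, numerical_cols, categorical_cols, X_train_preprocessed); note that the exact method name depends on the scikit-learn version, which is an assumption here:

import pandas as pd

ohe = preprocessor.named_transformers_['cat'].named_steps['onehot']
try:
    cat_names = ohe.get_feature_names_out(categorical_cols)   # newer scikit-learn
except AttributeError:
    cat_names = ohe.get_feature_names(categorical_cols)       # older scikit-learn
all_names = list(numerical_cols) + list(cat_names)

# if the ColumnTransformer returned a sparse matrix, densify it first
dense = X_train_preprocessed.toarray() if hasattr(X_train_preprocessed, "toarray") else X_train_preprocessed
X_train_df = pd.DataFrame(dense, columns=all_names)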
Building pipelines with onehotencoding and when fitting and transforming to training/test set and converting into data frame it results in the features not having names. Is there any way to get names for each encoded feature?# Numerical column transformer num_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='mean')), ('scaler', StandardScaler()) ]) # Categorical column transformer cat_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='most_frequent')), ('onehot', OneHotEncoder(handle_unknown='ignore')) ]) # Preprocessing pipeline preprocessor = ColumnTransformer( transformers=[ ('num', num_transformer, numerical_cols), ('cat', cat_transformer, categorical_cols) ]) # Fitting the data and transforming the training & test set X_train_preprocessed = preprocessor.fit_transform(X_train) test_preprocessed = preprocessor.fit_transform(test)
Loss of feature names when onehotencoding
For the record mostly... (since I guess you've already found a solution). One path worth exploring is to use Kafka Connect. That's the reason the API was created, after all. I would try to create/write custom connectors to extract data from the desired ERPs and feed it into the Kafka cluster: either directly from the ERP's database(s), if such access can be granted; or by trying to invoke the various REST services/endpoints the ERP might expose; or maybe the ERP already publishes events to expose state changes, etc.
We are working on a project to collect data from ERPs and store it in our database. We studied lots of big data technologies and came to the conclusion to use Apache Kafka for the task, since Kafka ingests data in real time. The issue is that after our research we still don't know how to go about it. We were able to create a pipeline to collect data from a file.txt, but not when it comes to ERPs, using their APIs. Can someone guide us? Or can anyone recommend a course that we could buy or watch that can help us? Thanks
How to build a pipeline that will collect data from ERPs like Odoo and store in an external database using Kafka
There is a Set Json Property task that might help to do this. This task can update a specific json object's property with a specified value. You can try adding a Set Json Property task before the Azure App Service Deploy task, and update the webJobName value in webjob-publish-settings.json. Check here for how to use this task.
I have one solution in Visual Studio which is being deployed as a webjob onto the Azure App Service. This is done manually, and the same solution is deployed multiple times with different names for the webjob. The webjob is a triggered one, and internally specific functions are performed based on the arguments passed during the trigger. This needs to be achieved via a release pipeline now. We are able to deploy the webjob through the pipeline, but the webjob name is static right now, since the value is picked up from webjob-publish-settings.json in my checked-in code. How can I modify the name of the webjob during deployment?
Change webjob name during Azure Release deployment
You may use$num = [int]((Get-Content .\1.mpcpl | Select-Object -Last 1) -replace '^(\d).*', '$1')Notes(Get-Content .\1.mpcpl | Select-Object -Last 1)- reads the file and gets the last line(... -replace '^(\d).*', '$1')- gets the first digit from the returned line (NOTE: if the line does not start with a digit, it will fail as the output will be the whole line)[int]casts the string value to anint.Another way can be getting a match and then retrieving it from the default$matchesvariable:[IO.File]::ReadAllText($filepath) -match '(?m)^(\d).*\z' | Out-Null [int]$matches[1]The(?m)^(\d).*\zpattern gets the digit at the start of the last line into Group 1, hence$matches[1].
Let's say I have a file with the following content: 1,first_string,somevalue 2,second_string,someothervalue n,n_nd_string,somemorevalue I need to Get-Content this file, get the last string, and get the number before the "," symbol (n in this case). (I'll just increment it and append an n+1 string to this file, but that does not matter right now.) I want all this to be done with a pipeline cascade. I have come to this solution so far: [int]$number = Get-Content .\1.mpcpl | Select-Object -Last 1 | Select-String -Pattern '^(\d)' | ForEach-Object {$_.matches.value} It actually works, but I wonder if there are any ways of addressing the Select-String -Pattern '^(\d)' return object without using the foreach loop? Because I know that the return collection in my case will only consist of 1 element (I'm selecting a single last string of the file and I get only one match).
How to address "Select-String -pattern" return object via pipeline without using foreach loop?
mkfifo /tmp/pipe, then vlc /tmp/pipe, then cat /tmp/video.avi > /tmp/pipe. Works here like a charm.
I have a program that open a video.m4v and send it to a fifo. I want to play this video through fifo using vlc but i can't and of course I'm new to vlcthis is my program which open the video.m4v and send it to a fifo:int main(){ char massage[2048]; char testPipe[256]; FILE* sourcefile = fopen("safe.mp4", "rb"); sprintf(testPipe, "/home/milliam/pipe"); mkfifo( testPipe, 0777); int saveErr = 0; FILE* des = fopen(testPipe,"wb"); saveErr = errno; printf(" %s\n",strerror(saveErr)); int numOfByte = 1; int result =0; while(numOfByte){ numOfByte = fread(massage,1, 2000,sourcefile); result = fwrite(massage, 1, 2000, des); memset(massage,0,2047); } return 0; }i tested the above program and understood it works fine(i.e. open a fifo and write data to it correctly)Now i want to open the pipe using vlc:~$ vlc fd://pipethis doesn't work and give me below error:core debug: no access modules matched core error: open of `fd://pipe' failedbut when i try:ffmpeg -i pipe -f asf - | vlc -then vlc play video normally.how i can make vlc to play video through pipe without using ffmpeg?*I think i need to config vlm.conf but I even didn't know where is it and how to config it?
how to change vlc's input to a fifo
You can try using this rest api to do this. You can add a powershell task to the build pipeline job and then call the rest api from the script. PATCH https://dev.azure.com/{project}/_apis/wit/workitems/{id}?api-version=5.0 Sample request body: { "op": "add", "path": "/relations/-", "value": { "rel": "System.LinkTypes.Hierarchy-Reverse", "url": "https://dev.azure.com/XXX/_apis/wit/workItems/XXX", "attributes": { "comment": "Making a new link for the wit " } } } You can also refer to this case for help.
I am trying to create a build pipeline that creates a parent Feature in Azure DevOps along with several child User Stories that are linked to the parent Feature. I have used an Azure DevOps extension for creating Work Items and I have successfully created a parent Feature via the build pipeline - but I cannot figure out how to add more tasks to the build to create the child User Stories that are linked to the parent Feature. Expected result is that the build completes successfully and the parent Feature / linked child User Stories are created without issue.
Creating Linked Child Item in Azure DevOps via Build Pipeline
Where are you extracting the data from? If its a database, it is easy because you can add it in the sql statement used when selecting data. For example:select *, NewColumn='Value' from yourTableIf you want a solution for every data source possible, you can use the derived column transformation in data flow:https://learn.microsoft.com/en-us/azure/data-factory/data-flow-derived-columnAlso you can add data from the pipeline itself using string functions, for example:@concat('select *, pipeId= ''', pipeline().RunId,''' from SalesLT.Address')This will select all the fields, and an additional field called pipeId which will have the same value for every row, and will be the pipeline run id.Hope this helped!!
I am copying data from system X to blob storage as a parquet file or excel file. Is it possible to add one more step that can help me add an extra column with the pipeline run ID or trigger ID? Thank you in advance.
Is it possible to add a column with specific value to data in azure data factory pipeline
As stated previously, the fit_transform method doesn't pass y off to transform. What I've done previously is implement my own fit_transform. Not your code, but here's an example I wrote recently:class MultiColumnLabelEncoder: def __init__(self, *args, **kwargs): self.encoder = StandardLabelEncoder(*args, **kwargs) def fit(self, X, y=None): return self def transform(self,X): data = X.copy() for i in range(data.shape[1]): data[:, i] = LabelEncoder().fit_transform(data[:, i]) return data def fit_transform(self, X, y=None): return self.fit(X, y).transform(X)There are other ways. You could have y as a class param and access it in the transform method.Edit: I should note that you can pass y off to your version of transform. So:def fit_transform(self, X, y=None): return self.fit(X, y).transform(X, y)
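As a self-contained sketch of the "store what you need from y during fit" idea mentioned at the end (my own toy example, not the question's code): inside a Pipeline, intermediate steps receive y through fit_transform(X, y) at fit time, while transform(X) at predict time gets no y, so anything learned from y has to be kept on the transformer.

import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LinearRegression

class AddYSum(BaseEstimator, TransformerMixin):
    def fit(self, X, y=None):
        self.y_sum_ = 0.0 if y is None else float(np.sum(y))  # remember what we learned from y
        return self
    def transform(self, X):
        return X + self.y_sum_                                 # no y available here; reuse stored value
    def fit_transform(self, X, y=None):
        return self.fit(X, y).transform(X)

X = np.array([[2.0], [1.0], [5.0]])
y = np.array([3.0, 5.0, 6.0])
pipe = Pipeline([("shift", AddYSum()), ("lr", LinearRegression())])
pipe.fit(X, y)             # fit_transform receives y, so y_sum_ gets set
print(pipe.predict(X))     # transform is called without y and uses y_sum_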
I tried to transform the column 'X' using values in column 'y' (this is a toy example, just to show usingyfor transformation) before fitted by the last linear regression estimator. But whydf['y']is not passed toMyTransformer?from sklearn.base import TransformerMixin class MyTransformer(TransformerMixin): def __init__(self): pass def fit(self, X, y=None): return self def transform(self, X, y=None): print(y) return X + np.sum(y) df = pd.DataFrame(np.array([[2, 3], [1, 5], [1, 1], [5, 6], [1, 2]]), columns=['X', 'y']) pip = Pipeline([('my_transformer', MyTransformer()), ('sqrt', FunctionTransformer(np.sqrt, validate=False)), ('lr', LinearRegression())]) pip.fit(df[['X']], df['y'])Running this script will raise an error at linereturn X + np.sum(y), looks like y isNone.
Use pipeline with custom transformer in Scikit Learn
You can click on the build record of the CI trigger in the build pipeline. The username of the person who made the change and the commit will be displayed at the top of the record. Click the commit and you will be taken to the changed files. In the sample, I use a GitHub repository for the code; I think the same is true with gitlab.
I am using an external private gitlab repository for code and usingazure-devopsfor cicd pipeline. When the user checks in code to the gitlab project repositoryazure-devopsbuilds and releases. It names the build as CI BUILD. How can I get username and current changeset (name of the user who just checked in his code and his changeset files name) to show with build inazure-devops?
Azure devops cicd gitlab integration
The first code snippet, creating the test_cv vectorizer, is overcomplicated and unnecessary. It may also give different vectors than you expect. Let us first simplify that, and the pipeline part will become easier. You created the cv object and used its fit_transform function. This function fits the vectorizer to the data; in this case, that means it learns the vocabulary of the data and stores it internally in the vocabulary_ instance variable. It then transforms the input data according to the vocabulary it learned while fitting. If you now simply called the transform function on the same object, it would use the stored vocabulary to transform the new data as well. You do not need to create a new object test_cv and pass the old vocabulary to it. cv = CountVectorizer() X = cv.fit_transform(corpus).toarray() test_X= cv.transform(test_corpus).toarray() Now that this is simplified, the pipeline part becomes easier to understand. Your pipeline code already works in its current state (provided the inputs are correct). You can now call preprocessor.fit(corpus) if you want the vectorizer to learn the vocabulary, and then call preprocessor.transform(test_corpus) to apply the vectorizer to the test_corpus.
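A short, self-contained sketch of that flow with a pipeline (toy corpus of my own, not the question's dataset): fit once on the training corpus, only transform the test corpus, and read the learned vocabulary back through named_steps.

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer

corpus = ["the cat sat", "the dog barked"]
test_corpus = ["the cat barked"]

text_pipe = Pipeline(steps=[("count", CountVectorizer())])
X = text_pipe.fit_transform(corpus)          # learns the vocabulary from the training corpus
test_X = text_pipe.transform(test_corpus)    # reuses that vocabulary, no refitting

print(text_pipe.named_steps["count"].vocabulary_)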
Usually when I use CountVectorizer only, I can get the vocabulary and use it as a parameter for a new CountVectorizer object to process new data before the predict: cv = CountVectorizer() X = cv.fit_transform(corpus).toarray() cv_dict = cv.vocabulary_ test_cv = CountVectorizer(vocabulary = cv_dict) test_X= test_cv.fit_transform(test_corpus).toarray() I want to know how I can do the same thing using a pipeline. I wrote this code to start: text_features = dataset['corpus'] text_transformer = Pipeline( steps=[ ('count', CountVectorizer()), ] ) preprocessor = ColumnTransformer( transformers=[ ('text', text_transformer, text_features[0]) ] )
how to get vocabulary of CountVectorizer using pipeline
You are trying to use a data frame in the pipe that has not been created yet, hence you are getting the error. We need to break the code into two parts, so that the first part creates the data frame and the second part uses the data frame created by the first part. M_Summer <- combination %>% filter(Sex == "Men", Season == "Summer") ####Break the code M_Summer <-M_Summer %>% mutate(Num_Sports = length(unique(M_Summer$Sport))) %>% select(-c(Sport)) Please provide head(df) of your data or a reproducible example for more accurate code. Do let me know in case you have any queries.
I'm working on a dataset (olympics) and I would like to create a sub_dataset with specific conditions. To do that I'm using the dplyr library and the code works. The problem is that if I change the code using%>%to make it more readable it doesn't work anymore. I've pasted the code below:combination <- select(olympics, Sex, Season, Sport) M_Summer <- combination %>% filter(Sex == "M", Season == "Summer") %>% mutate(Num_Sports = length(unique(M_Summer[["Sport"]]))) %>% select(-c(Sport))If I run the code above, R shows this error message:Error in mutate_impl(.data, dots): Evaluation error: object 'M_Summer' not found.Thanks for the help!
Error in running code with R pipeline using dplyr library
You can either use the JENKINS EnvInject Plugin, which can "Inject passwords to the build as environment variables", or the JENKINS Mask Passwords Plugin. With pipelines, you would use an array of secrets.
I have a jenkins pipeline job which has a username and password as string and password parameters. I want to get the values from those parameters and mask the values so that I can use them in the pipeline for accessing my tool. Can someone help with this, please?
How do I mask the jenkins password in a pipeline
Got it. The error was due to an unmentioned process. TheUploadedFileclass in Laravel was interpreting the mimetype of the file asapplication/x-gzip, with an empty extension, so the resulting file was saved as[hashed_file_name].instead of[hashed_file_name].tar.gz. Then (on another server) I was usingGuzzleHttpto get the file, andSymfonyto guess the extension.$extension = ExtensionGuesser::getInstance()->guess($contentType);Because of the mimetype, the rebuilt file, usingContent-Typeheader to get the extension, was simply.gzinstead of.tar.gzor.tgz. Changes to my upload script fixed it.$alias = $file->getClientOriginalName(); $mimetype = $file->getMimeType(); $extension = $file->guessClientExtension() ?: pathinfo($alias, PATHINFO_EXTENSION); if ( ends_with($mimetype, 'x-gzip') && ends_with($alias, ['tar.gz', 'tgz']) ) { $mimetype = 'application/tar+gzip'; $extension = 'tar.gz'; } $hash = $file->hashName(); if ( ends_with($hash, '.') ) { $hash .= $extension; } $path = $file->storeAs($storage, $hash);
I'm trying to extract a .tar.gz that I compress in a pipeline with bash. The pipeline selects files that should be packaged in an update usingrsyncand then compresses them withtar:rsync -azp --files-from=${RSYNC_UPDATE_FILE} --ignore-missing-args src update tar czf ${UFILE} updateThe files look correct when I open the .tar.gz with a program like WinRar. I then extract the update using PHP in an app.# Get the full path where it should be extracted $dirpath = $dirpath ?: File::dirname($zippath); $phar = new \PharData($zippath); # Check if it's compressed: e.g. tar.gz => tar $zip = $phar->isCompressed() ? $phar->decompress() : $phar; try { # Extract it to the new dir $extracted = $zip->extractTo($dirpath); } catch (\Exception $e ) { throw new CorruptedZip("Unable to open the archive.",424,$e); }The extracted files have the right permissions, dir structure, etc. but I guess they are still compressed. The files all contain many groups of strings, not PHP code.02a0 048b 2235 bca8 ad5e 4f7e d9be ed1f 5b00 24d5 9248 8994 2c75 f778 e293 74db 6401 a802 0af5 55e1 52fc fb37 80ff f99fCan anyone see where I'm missing a step?
Extracting .tar.gz with PHP after rsync
I really wouldn't recommend it, but there's a way via Jenkins Remote API -Jenkins input pipeline step filled via POST with CSRF - howto?curl -X POST -H "Jenkins-Crumb:${JENKINS_CRUMB}" -d json='{"parameter": {"name": "${PARAMETER_NAME}", "value": "${PARAMETER_VALUE}"}}' -d proceed='${SUBMIT_CAPTION}' 'http://j${JENKINS_URL}/job/${JOB_NAME}/${BUILD_ID}/input/${INPUT_ID}/submit'The question would be how would you run this? A new input in the upstream job? Run when?It might be more useful to divide the downstream job into two and run the actual deploy only when user accepts the input in the upstream job.
I have an upstream pipeline which is calling another downstream pipeline: build job: "/org/projectA/master", parameters: [[$class: 'StringParameterValue', name: 'variable', value: 'value']], wait: true In my downstream pipeline, there is a step that asks for approval: input "Deploy to prod?" Currently the job is paused in the downstream pipeline waiting for approval, but in my main job (upstream pipeline), it is just waiting for the sub pipeline to finish and doesn't show any message for the approver. So is it possible to display the interactive input in my main pipeline, so the approver doesn't need to click through to the sub pipeline to check the status? BTW, I cannot move the input to the main pipeline, because there are other steps after it in the sub pipeline. Thanks in advance for any suggestion.
How to control downstream pipeline's interactive input in upstream pipeline
The problem was that the subscription we were using has a firewall in place, so there was no way to get through the firewall, even by doing things like adding "WEBSITE_WEBDEPLOY_USE_SCM" = false in the app settings of the web service. The only solution we could think of was to create an agent inside a VM sitting behind the same firewall.
I am using VSTS pipeline to deploy asp.net core 2.2 MVC to azure web app. The last step which is deploy to azure fails - please see the error below:##[error]Failed to deploy web package to App Service. 2019-01-08T21:39:47.3810424Z ##[error]Error Code: ERROR_DESTINATION_NOT_REACHABLE More Information: Could not connect to the remote computer ("our website url"). On the remote computer, make sure that Web Deploy is installed and that the required process ("Web Management Service") is started. Learn more at:http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_DESTINATION_NOT_REACHABLE. Error: Unable to connect to the remote server Error: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond ????:443I even tried to Take App Offline but no luck:However, it works fine if I publish the web app through Visual studio.
how to fix ERROR_DESTINATION_NOT_REACHABLE error in azure web app deployment using VSTS
You should be able to usezipfileto build the file however you'd like.class MyTask(luigi.Task): def output(self): date_path = self.search['date_path'] zip_fn = "data/%s/%s.zip" % (date_path, date_path) return luigi.LocalTarget(zip_fn) def run(self): ztemp = tempfile.NamedTemporaryFile(mode='wb') z = zipfile.ZipFile(ztemp, 'w') # build the zip file z.close() os.rename(ztemp.name, self.output().path)Fromthe docs on FileSystemTarget,
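Because the inline formatting above is hard to read, here is the same idea re-typed as a runnable sketch; the search/date_path details from the question are replaced by a plain luigi.Parameter, and the archive contents are a placeholder.

import os
import tempfile
import zipfile

import luigi

class MyTask(luigi.Task):
    date_path = luigi.Parameter()

    def output(self):
        return luigi.LocalTarget("data/%s/%s.zip" % (self.date_path, self.date_path))

    def run(self):
        out_dir = os.path.dirname(self.output().path)
        os.makedirs(out_dir, exist_ok=True)
        # build the archive in a temp file in the same directory, then move it into
        # place, so the target only ever appears fully written
        ztemp = tempfile.NamedTemporaryFile(mode="wb", delete=False, dir=out_dir)
        with zipfile.ZipFile(ztemp, "w") as z:
            z.writestr("example.txt", "placeholder contents")  # build the zip here
        ztemp.close()
        os.rename(ztemp.name, self.output().path)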
It is ok tooutput()zip files like this:def output(self): date_path = self.search['date_path'] zip_fn = "data/%s/%s.zip" % (date_path, date_path) return luigi.LocalTarget(zip_fn)But how to pas this zip inrun()method?class ZeroTask(luigi.Task): path_in = luigi.Parameter() textfiles = [] path_to_zip = '' def requires(self): return [] def run(self): # Get a bunch of text files # Do some manipulations with textfiles # Create a result.zip # self.path_to_zip = '~/Project/result.zip' def output(self): zip_fn = self.path_to_result.zip return luigi.LocalTarget(zip_fn)What should I do in therun()method?
How to output a zipfile in Luigi Task?
This worked for me. I think your cmdkey has a typo.string tsScript = $"mstsc /v:{machinename}"; string cmdKey = $"cmdkey /generic:{machinename} /user:{username} /pass:{password}"; using (Runspace rs = RunspaceFactory.CreateRunspace()) { rs.Open(); using (Pipeline pl = rs.CreatePipeline()) { pl.Commands.AddScript(cmdKey); pl.Commands.AddScript(tsScript); pl.Invoke(); } }
I'm trying to use a script like this:$Server="remotepc" $User="user" $Password="password" cmdkey /generic:$Server /user:$User /pass:$Password mstsc /v:$Server /consolewhich works fine when running in powershell.I'm trying to get this using runspace and pipeline in c#.So this code works:string server = "server"; string mstscScript = "mstsc /v:"+server; Runspace runspace = RunspaceFactory.CreateRunspace(); runspace.Open(); Pipeline pipeline = runspace.CreatePipeline(); pipeline.Commands.AddScript(mstscScript); pipeline.Invoke(); runspace.Close();However, if I add the script with the username and password it stops working and freezes.So this code does not work.string username = "user"; string password = "password"; string server = "server"; string cmdScript="cmd/genaric:"+server+" /user:$" + username" + /pass:$" + password; string mstscScript = "mstsc /v:" + server; Runspace runspace = RunspaceFactory.CreateRunspace(); runspace.Open(); Pipeline pipeline = runspace.CreatePipeline(); pipeline.Commands.AddScript(cmdScript); pipeline.Commands.AddScript(mstscScript); pipeline.Invoke(); runspace.Close();
MSTSC into remote desktop with credentials with powershell using c#
You can first use an environment variable (to set the condition): https://jenkins.io/doc/book/pipeline/jenkinsfile/#setting-environment-variables-dynamically. Then you can use the environment built-in condition of when: https://jenkins.io/doc/book/pipeline/syntax/#built-in-conditions It executes the stage when the specified environment variable is set to the given value, for example: when { environment name: 'DEPLOY_TO', value: 'production' }
Currently I have a Jenkinsfile that looks like the following :stage('A'){ agent{docker{image A}} when{branch = 'master'} step{do blah} } stage('B'){ when{branch = 'master'} agent{docker{image B}} step{do blah} }Concern is that this condition is duplicated all over the place and I'd like to extract it - but fail to identify the correct syntax if any.
How can I avoid duplication of conditions that apply to multiple steps in my Jenkinsfile pipeline?
Pipeline can execute multiple commands chained together, as if using the pipe operator |. It is enough to add each command to the pipeline's Commands property, e.g. pipeline.Commands.Add(myCmd); pipeline.Commands.Add(myPipedCmd); For example, to run Get-Item | Select Name: Runspace runspace = RunspaceFactory.CreateRunspace(); runspace.Open(); Pipeline pipeline = runspace.CreatePipeline(); Command dir = new Command("Get-Item"); pipeline.Commands.Add(dir); Command select = new Command("Select"); select.Parameters.Add(null, "Name"); pipeline.Commands.Add(select); var out1 = pipeline.Invoke(); // ... runspace.Close();
Does Pipeline.Invoke in System.Management.Automation.Runspaces call the cmdlets present in the Commands collection as a pipeline, or execute them individually, in C#? I have added two cmdlets to the pipeline and called Pipeline.Invoke; however, the output of the first cmdlet is not recognized as pipeline input for the second, and I am getting an error about a missing mandatory field for the second cmdlet. Can we achieve pipeline chaining in C# like we do in the PowerShell console with A | B?
Invoke powershell cmdlets in pipeline in C#
"Missing scheme in request url" always means that your URL is not valid; it's missing http:// or https://. So prepend https:// or http:// to the image url you have: `https://` + response.css("div.productimage img::attr(src)").extract_first()
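For what it's worth, here is a sketch of how the spider's parse callback might build the absolute URL; note also that, as far as I know, Scrapy's images pipeline expects image_urls to be a list of URLs rather than a single string (the lone "h" in the error message looks like the first character of a string being iterated), so wrapping the value in a list is part of this guess:

def parse(self, response):
    item = Brand()
    name = response.css("div.th::text").extract_first() or ""
    item['name'] = name.replace('Products of ', '')
    item['url'] = response.url
    src = response.css("div.productimage img::attr(src)").extract_first()
    if src:
        item['image_urls'] = [response.urljoin(src)]   # absolute URL, wrapped in a list
    yield item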
I got an value error:raise ValueError('Missing scheme in request url: %s' % self._url) ValueError: Missing scheme in request url: hMy items.py code is:class Brand(scrapy.Item): name = scrapy.Field() url = scrapy.Field() brand_image = scrapy.Field() image_urls = scrapy.Field() images = scrapy.Field()My setting.py is:BOT_NAME = 'scraper' SPIDER_MODULES = ['scraper.spiders'] NEWSPIDER_MODULE = 'scraper.spiders' ITEM_PIPELINES = {'scrapy.contrib.pipeline.images.ImagesPipeline': 1} IMAGES_STORE = 'images'My spider code:import scrapy import json from scraper.items import Brand class QuotesSpider(scrapy.Spider): name = "brandDetails" allowed_domains = ["ozhat-turkiye.com"] with open('brands.json') as data_file: data_item = json.load(data_file) start_urls = list() for item in data_item: start_urls.append(item["url"]) def parse(self, response): item = Brand() name = response.css("div.th::text").extract_first() name = name.replace('Products of ', '') item['name'] = name item['url'] = response.url urls = response.css("div.productimage img::attr(src)").extract_first() urls = response.urljoin(urls) item['image_urls'] = urls yield item
I am facing a ValueError while downloading/scraping images from a scrapy spider using the images pipeline
If your destination is azure table storage, you could put your filename into the partition key column. Otherwise, I think there is no native way to do this with ADF. You may need a custom activity or a stored procedure.
I am new to Data Factory. I am loading a bunch of csv files into a table and I would like to capture the name of each csv file as a new column in the destination table. Can someone please help with how I can achieve this? Thanks in advance.
Add file name as column in data factory pipeline destination
You can use the tokenizer attribute to tell the CountVectorizer to treat each list as a single document, and turn the lowercase option to False, like this: text_clf = Pipeline([('vect', CountVectorizer(tokenizer=lambda single_doc: single_doc,stop_words='english',lowercase=False)), ('tfidf', TfidfTransformer(use_idf=True)), ('clf', SGDClassifier(loss='hinge', penalty='l2', alpha=1e-3, random_state=42, verbose=1)),])
I am trying to create a classifier using Python and Sklearn. I currently have all my data imported successfully. I have been trying to follow a tutorial fromhere, changing it a bit as I go. Later into the project I realized that their training and testing data was much different then mine. If I understand it right they had something like this:X_train = ['Article or News article here', 'Anther News Article or Article here', ...] y_train = ['Article Type', 'Article Type', ...] #Same for the X_test and y_testWhile I had something like this:X_train = [['Dylan went in the house. Robert left the house', 'Where is Dylan?'], ['Mary ate the apple. Tom ate the cake', 'Who ate the cake?'], ...] y_train = ['In the house.', 'Tom ate the cake'] #Same for the X_test and y_testWhen I tried to train the classifier with there pipeline:text_clf = Pipeline([('vect', CountVectorizer(stop_words='english')), ('tfidf', TfidfTransformer(use_idf=True)), ('clf', SGDClassifier(loss='hinge', penalty='l2', alpha=1e-3, random_state=42, verbose=1)),])I get the error:AttributeError: 'list' object has no attribute 'lower'At this line:text_clf.fit(X_train, y_train)After doing research I now know that is because I am inputting a array for myX_traindata instead of a string. So my question is, how do I construct a pipeline that will accept arrays for myX_traindata and a string for myy_traindata? Is this possible to do with a pipeline?
Python Sklearn Pipeline with array
As mklement0 pointed out in the comments, we can access the PSObject's internal BaseObject, and everything will be contained there.
I am having strange problem regarding usage ofDataTablethat is set fromPowershellScriptCommandlettoC#. I don't know why a properly constructedDataTableis being pushed as series ofDataTableRowas data rather than fully packedDataTable. This is blocking me to process anything further. Below is C#commandlet.[Cmdlet(VerbsCommon.Set, "LinkParameter")] public class SetLinkParameter : PSCmdlet { [Parameter(Position = 0)] public string ParameterName { get; set; } [Parameter(Position = 1, Mandatory = true, ValueFromPipeline = false)] public object ParameterValue { get; set; } protected override void ProcessRecord() { base.ProcessRecord(); var setParameterValue = ParameterValue // This never comes in proper DataTable format rather than appears as System.Data.DataRow System.Data.DataRow System.Data.DataRow ............. ............ }AboveSystem.Data.Rowwould be equal to the amount of data rows contained in data table. I can't placeDataTableas Parameter type because I have to set all standard data type values along withDataTableas well.Below isPowershellscript.$table = New-Object system.Data.DataTable $col1 = $table.Columns.Add( "Col1", [string]) $col2 = $table.Columns.Add( "Col2", [int]) $table.Rows.Add( "bar", 0 ) $table.Rows.Add( "beer", 1 ) $table.Rows.Add( "baz", 2 ) Set-LinkParameter "CreatedData" $table Set-LinkParameter "SetTestMessage" "testMessage" // can also be set.How to resolve this issue?
Powershell C# Commandlet not able to get DataTable when set from script side
A pipe (|) is made to send one command's output as input to another command. For example, echo "hello there" will simply print hello there, but when you put a | and append a sed command after it, e.g. echo "hello there" | sed "s/hello/hi/", then it will print hi there, because echo's output works as standard input for the sed command. So in your 1st case, sleep 3 | echo "Hello world.", sleep doesn't send any output to echo; it only runs a sleep process in the background, and the standard output from the echo command shows us Hello world on screen. But in the 2nd case, when echo sends its standard output to the sleep command as standard input, I believe sleep doesn't take it, since sleep only takes parameters (numeric values) to tell it how long the sleep process should run; moreover, sleep is NOT supposed to show any standard output on screen, so the print doesn't happen there.
So I am readingthisarticle on how shell commands can be really fast and faster than a hadoop cluster for processing large amounts of data.I read thatin a data pipeline, shell commands are processed in parallel.I triedsleep 3 | echo "Hello world."It prints Hello World and goes into sleep mode, which exits after 3sBut when i did,echo "Hello World" | sleep 2It just went into sleep. Hello World was not printedCan anyone explain it to me why this happens? If the commands are executed in parallel, shouldn't Hello World get printed either way?
parallelism in shell data pipeline
There isdynamickeyword. It can be used like this:rule all: input: dynamic('{id}.png') rule draw: input: '{id}.txt' output: '{id}.png' shell: 'cp {input} {output}' rule cluster: input: 'input.csv' output: dynamic('{id}.txt') shell: 'touch 1.txt 2.txt'
I have a script which takes a large input file then breaks this down into a number of chunks from 1 to n using an unpredictable algorithm.Then a following script will process each of these chunks iteratively.How can I create a snakemake rule which essentially states that the output files will exist from 1 to n, and the following script should be run once for each of the 1 to n input files.Thanks!
Snakemake wildcard in output only
Put the output in aProcess {}block:function Out() { [CmdletBinding()] Param( [Parameter(Mandatory=$false, ValueFromPipeline=$true, Position=0)] $Out ) Process { $Out } }That block is invoked for each object received from the pipeline.From thedocumentation:ProcessThis block is used to provide record-by-record processing for the function. This block might be used any number of times, or not at all, depending on the input to the function. For example, if the function is the first command in the pipeline, the Process block will be used one time. If the function is not the first command in the pipeline, theProcessblock is used one time for every input that the function receives from the pipeline. If there is no pipeline input, theProcessblock is not used.This block must be defined if a function parameter is set to accept pipeline input. If this block is not defined and the parameter accepts input from the pipeline, the function will miss the values that are passed to the function through the pipeline.
I have a PowerShell function (Out()). When I want to get a result, it takes the last object from the pipline. For example: I want to show all objects in (gps):function Out() { [CmdletBinding()] [Alias()] Param( [Parameter(Mandatory=$false, ValueFromPipeline=$true, Position=0)] $Out ) $Out }Result:PS C:\> gps | Out Handles NPM(K) PM(K) WS(K) CPU(s) Id SI ProcessName ------- ------ ----- ----- ------ -- -- ----------- 505 27 7796 6220 0.78 13160 0 wmpnetwk
Show Last Object From Pipeline
This has changed over various versions but the most current information is that the binary releases under Linux and GPU are built with Nvidia's NCCL 2 library (since version 2.3) and under Windows they are built with MS-MPI. Other versions are built with OpenMPI.The page you are linking to has a lot of information especially in the sections aboutthe necessary changes to your Python script(they are minimal) andhow to invoke it.
I'm new to Microsoft CNTK. Google TensorFlow uses gRPC for communication between machines, but I don't know which pipeline is used for distributed systems in CNTK. Can you let me know? And could you give me some reference or site with examples for multiple machines / multiple GPUs? I have already been to this site, but I cannot find multi-machine information or code. THANK YOU! :)
which pipeline is used for distributed systems in CNTK?
I created another answer because I can't format code in a comment. If you can get the repository into a folder, you can iterate over the folder and add everything inside with the following code: $files = Get-ChildItem 'C:\PathToFiles' $files | ForEach-Object { Set-AzureRmDataFactoryV2Pipeline -DataFactoryName "your df name" -ResourceGroupName "your RG name" -Name "pipelineName" -DefinitionFile $_.Name } Hope this helped!
Is there any way to paste JSON code into the pipeline code view in azure data factory version 2? I am able to copy from the pipeline code view, but there seems to be no way to paste there.
Data factory version 2
"MEM (memory) and IF (instruction fetch) are the same hardware element?" No, they are not, because (a) why would they then be drawn as separate blocks, and (b) code loads (i.e. fetches) are not the same as data loads. Code fetches are used to understand what a new instruction wants to do with data (the function), while loads/stores are acts of obtaining the arguments of that function. "If they are the same memory then 2 instructions can't load or store in the same clock cycle, am I right?" Both load and store are done in the MEM stage, not the IF stage. Because there is only one MEM block on the diagram, at most one memory-related operation can be done per clock. This does not mean that the IF stage is necessarily blocked by MEM. Whether instruction/data memories are separate, or whether there is an instruction cache, would determine that, but it is outside the scope of the diagram you showed.
In a pipeline, are MEM (memory) and IF (instruction fetch) the same hardware element? If they are the same memory, then 2 instructions can't load or store in the same clock cycle, am I right? (MIPS processor diagram)
PIPELINE - mem(memory) and if(instruction fetch)
The documentation is misleading. Locking state doesn't actually lock states... It prevents state change notifications from being passed between child and parent (child <-> parent).
Dear GStreamer community, I am struggling trying to desynchronise parts of my pipeline. I am trying to prevent an element from propagating its state changes to its parent. I know there is gst_element_set_locked_state, which could help, but the issue is that I need my component to be able to handle its own state changes (I don't trigger them manually). The idea would be to unlock -> gst_element_set_state -> lock every time this is needed, but unfortunately the set_state is going to the parent bin. How should I handle this? Thanks in advance for your help! Alann
How to prevent states from propagating?
You can do it by posting to a gitlab trigger. As they say, you can do it with minimal effort. First, create the trigger in Settings ➔ CI/CD under Triggers (Add trigger). Second, call your trigger with curl: curl --request POST \ --form token=TOKEN \ --form ref=master \ https://gitlab.example.com/api/v4/projects/9/trigger/pipeline Hope it helps
I'm trying to run a gitlab pipeline from the command line. I am learning to configure the various options provided by gitlab: https://docs.gitlab.com/ee/ci/yaml/ I know I can just specify the branch name and then push to the repository. Then, when I push, gitlab executes the jobs defined in the yaml file, but: How can I simulate the gitlab-ci.yaml file's behaviour so I do not have to push to test it? Is it possible to run a terminal/console command to run the pipeline?
How to run gitlab yaml project configuration from terminal?
The simple way here is to use the XML method to convert your pipe-delimited string data into rows: DECLARE @DATA NVARCHAR(MAX); SET @DATA = '1|Content|2017-02-11|Guest|Gold|||||1903'; SELECT ROW_NUMBER() OVER(ORDER BY (SELECT 1)) RN, split.a.value('.', 'VARCHAR(MAX)') [Items] FROM ( SELECT CAST('<X>'+REPLACE(@DATA, '|', '</X><X>')+'</X>' AS XML) AS String ) A CROSS APPLY String.nodes('/X') AS split(a); Result: RN Items 1 1 2 Content 3 2017-02-11 4 Guest 5 Gold 6 7 8 9 10 1903
I have a Pipe lined String like below:'1|Content|2017-02-11|Guest|Gold|||||1903'I want to split the String value into Rows. I found many suggestions by surfing on the net. Most people suggest the functiondbo.Split.When I try to split my string using that function:SELECT ROW_NUMBER() OVER(ORDER BY(SELECT NULL))RN, Items FROM dbo.Split('1|Content|2017-02-11|Guest|Gold|||||1903','|')It gives the result like below:RN Items 1 1 2 Content 3 2017-02-11 4 Guest 5 Gold 6 1903It skips all the empty values and give only the value holding rows. but in my case if any values where empty, then I want it like below:RN Items 1 1 2 Content 3 2017-02-11 4 Guest 5 Gold 6 7 8 9 10 1903Which means, I want the empty vales as ' '. I tried and I can't get it. Please help me to get this. Thanks.
Split pipe lined string into Rows - Split the empty values also in SQL Server
1) Using the built-in BOD data frame as an example, the easiest is to form the zoo object using read.zoo like this:
library(dplyr) # library(magrittr) would also work for this example
library(ggplot2)
library(zoo)
BOD %>% read.zoo() %>% autoplot()
2) However, if you really wanted to use the zoo constructor then this works (with the same library statements):
BOD %>% { zoo(.[[2]], .[[1]]) } %>% autoplot()
If BOD has more than 2 columns then use .[-1] as the first argument.
3) This also works:
BOD %>% { zoo(.$demand, .$Time) } %>% autoplot
4) This also works:
library(magrittr) # must use magrittr
BOD %$% # note that this is a different pipe operator
  zoo(demand, Time) %>% autoplot()
I'm trying to build a pipeline like: df <- df %>% .....some functions that take df as the first parameter.... zoo(???) %>% .....some functions that take df as the first parameter.... Because the zoo() step requires df[, some_columns] as its first argument and df$a_index as its second, how can I write this inside the pipeline, if I don't want to break the pipeline into: df <- df %>% .... df <- zoo(df[, some_columns], df$a_index) df <- df %>% ....
In r how to make a pipeline with 2 parts of a data frame as parameters of a function?
Full-text search in Stitch pipeline aggregate actions is currently unsupported in the Beta version of the Stitch product, but we hope to support it when Stitch reaches GA (General Availability).
I am trying to run a query using text operator by the new MongoDB feature stitch. I have already tried a few options, but the call responds with the following message:unknown operator: $searchHow can I resolve this error?I also have the text index created.{ "v" : 2, "key" : { "_fts" : "text", "_ftsx" : 1 }, "name" : "script_text_description_text", "ns" : "test.scripts", "weights" : { "description" : 1, "script" : 1 }, "default_language" : "english", "language_override" : "language", "textIndexVersion" : 3 }Attempt #1:client.executePipeline([{ "service": "mongodb-atlas", "action": "aggregate", "args": { "database": "test", "collection": "scripts", "pipeline": [{ $match: { $text: { $search: "docker" } } } ] } }])Attempt #2:db.collection('scripts').find({"$text":{"$search":'docker'}})Attempt #3:db.collection('scripts').aggregate([{ "$match": { "$text": { "$search": "docker" } } }])UPDATE:I applied this work around.import { StitchClientFactory,BSON } from 'mongodb-stitch'; let bsonRegex = new BSON.BSONRegExp(search, 'i') // showLoading(); db.collection('clients').find({owner_id: client.authedId(),name:bsonRegex}).execute().then(docs => { funct(docs); // hideLoading(); });
unknown operator: $search pipeline mongodb stitch
Use cut -f 2 practice.sam | sort | uniq -c (note that sort's -o option expects an output file name, so it doesn't belong in a plain pipeline). In your original code, you're redirecting the output of cut to field2.txt and at the same time trying to pipe the output into sort. That won't work (unless you use tee). Either separate the commands into individual commands (e.g., use ;) or don't redirect the output to a file. Ditto the second half, where you write the output to a file and thus end up with nothing going to stdout, and nothing being piped into uniq. So an alternative could be: cut -f 2 practice.sam > field2.txt ; sort -o sortedfield2.txt field2.txt ; uniq -c sortedfield2.txt which is the same as running the three commands one after another: cut -f 2 practice.sam > field2.txt, then sort -o sortedfield2.txt field2.txt, then uniq -c sortedfield2.txt
Trying to get a certain field from a sam file, sort it and then find the number of unique numbers in the file. I have been trying:cut -f 2 practice.sam > field2.txt | sort -o field2.txt sortedfield2.txt | uniq -c sortedfield2.txtThe cut is working to pull out the numbers from field two, however when trying to sort the numbers into a new file or the same file I am just getting a blank. I have tried breaking the pipeline into sections but still getting the same error. I am meant to use those three functions to achieve the output count.
Pipelining cut sort uniq
The difference between the two plots is that in the line plt.plot(X[:,np.newaxis],y_pred,color="red") the values in X[:,np.newaxis] are not sorted, while in plt.plot(xfit[:,np.newaxis],y_pred,color="red") the values of xfit[:,np.newaxis] are sorted. Now, plt.plot connects any two consecutive values in the array with a line, and since they are not sorted you get this bunch of lines in your first figure. Replace plt.plot(X[:,np.newaxis],y_pred,color="red") with plt.scatter(X[:,np.newaxis],y_pred,color="red") and you'll get this nice looking figure:
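A small sketch of the sorted variant, reusing the fitted poly_model and the X, y arrays from the question's code (np.argsort is just one way to order the points before drawing the line):

import numpy as np
import matplotlib.pyplot as plt

order = np.argsort(X)                       # indices that put the random X values in ascending order
X_sorted = X[order]
y_line = poly_model.predict(X_sorted[:, np.newaxis])

plt.scatter(X, y)
plt.plot(X_sorted, y_line, color="red")     # consecutive points are now adjacent in x
plt.show()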
With the following code I just want to fit a regression curve to sample data, which is not working as expected. X = 10*np.random.rand(100) y= 2*X**2+3*X-5+3*np.random.rand(100) xfit=np.linspace(0,10,100) poly_model=make_pipeline(PolynomialFeatures(2),LinearRegression()) poly_model.fit(X[:,np.newaxis],y) y_pred=poly_model.predict(X[:,np.newaxis]) plt.scatter(X,y) plt.plot(X[:,np.newaxis],y_pred,color="red") plt.show() Shouldn't there be a curve which perfectly fits the data points? Because the training data (X[:,np.newaxis]) and the data used to predict y_pred are the same (also X[:,np.newaxis]). If I instead use the xfit data to predict with the model, the result is as desired: y_pred=poly_model.predict(xfit[:,np.newaxis]) plt.scatter(X,y) plt.plot(xfit[:,np.newaxis],y_pred,color="red") plt.show() So what's the issue and the explanation for such a behaviour?
Pipeline with PolynomialFeatures and LinearRegression - unexpected result
It seems as though what you are trying to do is create your own buildpack (by cloning the Django one and editing it). Bluemix supports 3rd-party buildpacks from any public git repo, so your best bet is to do the following: 1) fork the Django buildpack and make the required edits for your app; 2) put your application in its own repo; 3) point the pipeline at this repo and configure your build/test/deploy stages; 4) configure your "deploy" stage to use your newly modified buildpack, by either including a buildpack line in your manifest.yml or modifying the deploy script to cf push -b http://yourbuildpackurl.git "${CF_APP}"
I want to use the IBM Bluemix DevOps Services, and more especially the automated pipeline to pass the last pushed commit through the build, the tests, then deploy in a test environment.All the guides I found recommend having one repo with the server and application together, and link this repo to the pipeline. While such a configuration works, I feel like it is against the Django standards. The application (what I develop) should be separated (ie: on another git repo) from the server (which is just a part to make the application work).I do not know how to manage this situation. Should I:Write a build script which usesgit cloneto retrieve a build-pack likehttps://github.com/fe01134/djangobluemixthen modify the adequate files ;Find a way to attach two git repositories to one pipeline ;Forget the idea and go for the IBM recommended way to have the server and the application on the same repo?
Server and application on two different git repositories
The module path has changed from apache_beam.utils to apache_beam.options. You should now use: from apache_beam.options.pipeline_options import PipelineOptions from apache_beam.options.pipeline_options import SetupOptions from apache_beam.options.pipeline_options import GoogleCloudOptions from apache_beam.options.pipeline_options import StandardOptions Official documentation here: https://beam.apache.org/releases/pydoc/2.0.0/_modules/apache_beam/options/pipeline_options.html
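With those imports, a minimal version of the question's snippet might look like the sketch below; the project and bucket names are placeholders, and the final Create/Map step is only there so the pipeline does something when run.

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, GoogleCloudOptions, StandardOptions

options = PipelineOptions()
gcloud = options.view_as(GoogleCloudOptions)
gcloud.project = 'my-project'                       # placeholder
gcloud.job_name = 'mypipe'
gcloud.staging_location = 'gs://my-bucket/staging'  # placeholder
gcloud.temp_location = 'gs://my-bucket/temp'        # placeholder
options.view_as(StandardOptions).runner = 'DataflowRunner'  # or 'DirectRunner' to test locally

with beam.Pipeline(options=options) as p:
    p | beam.Create(['hello', 'world']) | beam.Map(print)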
I want to create a very simple pipeline but I already got stuck at the beginning. Here is my code: import apache_beam as beam options = PipelineOptions() google_cloud_options = options.view_as(GoogleCloudOptions) google_cloud_options.project = 'myproject' google_cloud_options.job_name = 'mypipe' google_cloud_options.staging_location = 'gs://mybucket/staging' google_cloud_options.temp_location = 'gs://mybucket/temp' options.view_as(StandardOptions).runner = 'DataflowRunner' This produces the error: NameError: name 'PipelineOptions' is not defined
beam dataflow python name 'PipelineOptions' is not defined
We are just getting started with our chaos engineering efforts, but I'll offer some thoughts regarding your questions. There are at least three distinct classes of experiment: instance/container kills that we expect the underlying infrastructure to handle automatically; higher-level but fairly localized failures like slow or unavailable dependencies; and large-scale failures like data center or region down. For a build pipeline the sweet spot would be in the middle there (i.e. higher-level but localized failures), because usually the software itself plays a role in responding to the failure. For example the software might include a circuit breaker that trips, throttling, automated failover, etc. If those are software functions, then they can either work or not work, and the build should uncover that. To the extent that resiliency to failure is a system requirement, then yeah, a failed experiment would fail the pipeline. Suppose for instance that build 392 has a correctly working circuit breaker and build 393 doesn't. That would be a failure, since the build goes from meeting the requirement to not meeting it.
Chaos engineering practices are becoming very widely used. One common example is Netflix's own Chaos Monkey. However, Chaos Monkey is often run ad hoc against random targets. I'm curious how chaos experiments might work in a typical CI/CD pipeline to enhance a specific service's resiliency. Since chaos experiments (usually) require a fully functional environment, when would they run? Would they run parallel to testing, or downstream? Would you run a chaos experiment with every commit, or just some? How long would you allow the chaos experiments to run? A 60-minute CPU spike might interfere with a "fail fast" approach, for example. Would a chaos experiment ever fail the pipeline? What would constitute a 'failure'?
How might Chaos Engineering look as part of a pipeline?
With git 2.12.2, I'm entirely unable to reproduce the behavior (re: notes not being printed) described in the question. That said, the following performs the requested operation and does not create a temporary file (on any system where bash could detect /dev/fd or /proc/self/fd support at compile time), and generates one-line output with notes appended to each line having them: #!/bin/bash in_note=0 notes= last_line= while IFS= read -r line; do if (( in_note == 0 )) && [[ $line = "Notes:" ]]; then ## at the start of a note in_note=1; continue fi if (( in_note == 0 )); then ## outside any note [[ $last_line ]] && printf '%s\n' "$last_line" last_line=$line continue fi if [[ $line = "" ]]; then ## at the end of a note in_note=0 printf '%s|%s\n' "$last_line" "$notes" last_line= notes= continue fi # all notes are prefixed by four spaces, so the below doesn't need extra spacing notes+="$line" ## inside of a note done < <(git log --oneline --notes) [[ $last_line ]] && printf '%s\n' "$last_line"
I can't seem to get git log to produce notes for machine consumption. git log will open less as a pager and notes are shown. git --no-pager log [--notes|--show-notes] won't show notes. git --no-pager log --notes | less will show notes. git --no-pager log --notes | less | cat won't. git log --notes > gitlog.txt works, but I'm trying to avoid managing files. cat <(git log --notes) won't show but is using a temporary file. less -f <(git log --notes --oneline) will show. git log 1>&2 | cat 2>&1 | cat just opens less still. git log 2>&1 | cat doesn't work. git log 2>&1 | cat 1>&2 | cat doesn't work. Help, I'm so confused: what black magic is causing part of the data I want to just be dropped, but apparently only at display time? P.S. If you're upset with all the useless-use-of-cats, imagine a perl/sed/grep/awk filter; ultimately I'm trying to strip some newlines so the note's value appends to the lines of the git log --oneline format.
dump "git log --notes" to shell pipe?
Node.js readable streams have a .pipe method which works pretty much like the Unix pipe operator, except that you can stream JS objects as well as just strings of some type. Here's a link to the doc on pipe. An example of its use in your case could be something like: const req = request(...); req.pipe(dst); req.pipe(hash); Note that you still have to handle errors per stream, as they're not propagated and the destinations are not closed if the readable errors.
Suppose I have a readable stream, e.g. request(URL). And I want to write its response to disk via fs.createWriteStream() and piping with the request. But at the same time I want to calculate a checksum of the downloading data via a crypto.createHash() stream. readable -+-> calc checksum | +-> write to disk And I want to do it on the fly, without buffering the entire response in memory. It seems that I can implement it using the old-school on('data') hook. Pseudocode below: const hashStream = crypto.createHash('sha256'); hashStream.on('error', cleanup); const dst = fs.createWriteStream('...'); dst.on('error', cleanup); request(...).on('data', (chunk) => { hashStream.write(chunk); dst.write(chunk); }).on('end', () => { hashStream.end(); const checksum = hashStream.read(); if (checksum != '...') { cleanup(); } else { dst.end(); } }).on('error', cleanup); function cleanup() { /* cancel streams, erase file */ }; But such an approach looks pretty awkward. I tried to use stream.Transform or stream.Writable to implement something like read | calc + echo | write but I'm stuck with the implementation.
How to read from one stream and write to several at once?
No. Pipelines can only be specified on index and bulk (index) operations.
I'm using NEST to communicate with Elasticsearch. In an index operation I can specify the pipeline to execute: var insertDocument = client.Index<Document>(docInsert, s => s.Index(idxName) .Pipeline("attachments")); Is it possible to execute an ingest pipeline in an Elasticsearch update? Thanks in advance,
Is it possible to execute an Ingest PipeLine in an ElasticSearch Update?
The error says that KNeighborsClassifier doesn't have a transform method: it is a plain estimator, not a transformer. A Pipeline can chain any number of steps, but every step except the last must be a transformer (i.e. implement fit and transform); only the final step may be an estimator such as a classifier. Please refer to the link below: http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html
I am trying to build a GridSearchCV pipeline in sklearn for using KNeighborsClassifier and SVM. So far, I have tried the following code: from sklearn.model_selection import GridSearchCV from sklearn.pipeline import Pipeline from sklearn.neighbors import KNeighborsClassifier neigh = KNeighborsClassifier(n_neighbors=3) from sklearn import svm from sklearn.svm import SVC clf = SVC(kernel='linear') pipeline = Pipeline([ ('knn',neigh), ('sVM', clf)]) # Code breaks here weight_options = ['uniform','distance'] param_knn = {'weights':weight_options} param_svc = {'kernel':('linear', 'rbf'), 'C':[1,5,10]} grid = GridSearchCV(pipeline, param_knn, param_svc, cv=5, scoring='accuracy') but I am getting the following error: TypeError: All intermediate steps should be transformers and implement fit and transform. 'KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=1, n_neighbors=3, p=2, weights='uniform')' (type <class 'sklearn.neighbors.classification.KNeighborsClassifier'>) doesn't Can anyone please help me with what I am doing wrong, and how to correct it? I think there is something wrong with the last line as well, regarding the params.
SKlearn pipeline using KNeighborsClassifier
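A minimal sketch of the correction implied by the answer above, assuming the goal is to tune the two classifiers separately rather than chain them in one Pipeline; the dataset, the scaler step and the exact grids are illustrative choices, not taken from the question. Only transformers may precede the final estimator, and pipeline parameters are addressed as stepname__parameter.

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# A transformer (here a scaler) may precede the final estimator,
# but two classifiers cannot be chained in one Pipeline.
knn_pipe = Pipeline([('scale', StandardScaler()),
                     ('knn', KNeighborsClassifier(n_neighbors=3))])
svm_pipe = Pipeline([('scale', StandardScaler()),
                     ('svm', SVC())])

# Step parameters are addressed as <step name>__<parameter name>.
knn_grid = GridSearchCV(knn_pipe,
                        {'knn__weights': ['uniform', 'distance']},
                        cv=5, scoring='accuracy')
svm_grid = GridSearchCV(svm_pipe,
                        {'svm__kernel': ['linear', 'rbf'], 'svm__C': [1, 5, 10]},
                        cv=5, scoring='accuracy')

knn_grid.fit(X, y)
svm_grid.fit(X, y)
print(knn_grid.best_params_, knn_grid.best_score_)
print(svm_grid.best_params_, svm_grid.best_score_)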
jq -c '.[] | {a: .children[1].text, f: .children[0].text} | select(.a != null) | select(.f != null) | [.a, [.f,(.f | length)]]'
Is there a way I can concatenate these jq commands into just one jq command? jq 'map({a: .children[1].text, f: .children[0].text})' | \ jq 'map(select(.a != null))' | \ jq 'map(select(.f != null))' | \ jq 'map([.a, [.f,(.f | length)]])' | \ jq -c '.[]' Thanks a lot.
JQ | concatenate commands
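Not what was asked, since the single jq command above already answers that, but for reference here is a small Python sketch of the same transformation. It assumes the input is a JSON array on stdin whose elements carry a children list of objects with a text field; that shape is inferred from the jq filters, not stated in the question.

import json
import sys

# Read the JSON array from stdin and print one compact JSON line per
# element, mirroring the combined jq filter above.
data = json.load(sys.stdin)
for item in data:
    children = item.get("children", [])
    a = children[1].get("text") if len(children) > 1 else None
    f = children[0].get("text") if len(children) > 0 else None
    if a is not None and f is not None:
        print(json.dumps([a, [f, len(f)]]))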
The inability to use wildcards for the Authorized Origin URIs is a bug that we are fixing; the fix should go out soon. If you need to work around this issue ASAP, you can use one of our SDKs (like the Stormpath Node SDK) to programmatically add URLs to the application.
I created an app with React and Express (client-side rendering). I use react-stormpath. When I added https://my-app-name.herokuapp.com to the Authorized Origin URIs in Stormpath Admin it worked fine. But when I created a pipeline with review apps, it creates a new subdomain for every pull request, e.g. https://my-app-name-pr-1.herokuapp.com, and there is a problem with CORS, because this new URL is not in the Authorized Origin URIs. And I can't use wildcards like https://*.herokuapp.com. Any hints on how to manage this? I really don't want to add a new Authorized Origin for every pull request. Thanks a lot!
Heroku pipelines with react stormpath
When you create a new job in a stage, you can write a script to be executed. In this script, you can call apt-get install, apt-get update and so on. As an example: #!/bin/bash # your script here sudo apt-get update sudo apt-get install jq jq --help I've used the script approach to install tools like nvm (https://github.com/creationix/nvm) so I can use any Node.js version. You will need to reinstall the tools you want to use in any job that requires them.
I want to use a custom build tool (e.g. installed with brew install or some such) in a stage of my BlueMix DevOps Services pipeline. The doc says that each stage runs in a fresh container. How do I get my tools loaded into that container for use in my pipeline stage?
How do I use custom tools in a Bluemix DevOps pipeline stage
Not sure if this is a better way than your second code snippet, but a way to solve the first one is to use a subshell { ... } right after the pipe: declare -A mapping seq 10 | { while read i; do key="key_$i" val="val_$i" echo "mapping[$key]=$val" mapping["${key}"]="${val}" done echo "${mapping["key_1"]}" echo "${mapping["key_2"]}" }
This question already has answers here: A variable modified inside a while loop is not remembered (8 answers) Closed 7 years ago. I want to fill an associative array in bash in a somewhat non-trivial setup. I have a pipeline of commands to produce the required input for the array. Here is a minimal/toy example: declare -A mapping seq 10 | while read i; do key="key_$i" val="val_$i" echo "mapping[$key]=$val" mapping["${key}"]="${val}" done echo "${mapping["key_1"]}" echo "${mapping["key_2"]}" In this example mapping is changed inside the while loop, but these changes do not propagate into the global namespace. I think this is because the while loop runs inside a separate subshell, so the namespaces have diverged. In order to avoid what I suspect is the problem with subshells, I came up with the following: declare -A mapping while read i; do key="key_$i" val="val_$i" echo "mapping[$key]=$val" mapping["${key}"]="${val}" done < <(seq 10) echo "${mapping["key_1"]}" echo "${mapping["key_2"]}" Thus, the generation part explicitly goes into a subshell, while the while loop is left alone at the top level. The construction works. My questions are: is there any better way to accomplish my goal? And is my suggestion about subshells correct? If so, why does bash use a subshell in the first case but not in the second? EDIT: after a little more digging, the question is mostly a duplicate of this one. A good list of options to handle the issue can be found at http://mywiki.wooledge.org/BashFAQ/024
bash: accessing global variables from a command pipeline [duplicate]
filenames = ['./cs_disp_train.txt', './cs_limg_train.txt'] txt_queue = tf.train.string_input_producer(filenames) # txt_queue = tf.FIFOQueue(10, tf.string) # init_txt_queue = txt_queue.enqueue_many(filenames) enqueue_ops = [] image_queues = tf.FIFOQueue(100, tf.string) num_reader = len(filenames) for i in range(num_reader): reader = tf.TextLineReader() _, buffer = reader.read(txt_queue) enqueue_ops.append(image_queues.enqueue(buffer)) tf.train.queue_runner.add_queue_runner( tf.train.queue_runner.QueueRunner(image_queues, enqueue_ops)) y = image_queues.dequeue() sess = tf.Session() coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(sess=sess, coord=coord) # sess.run(init_txt_queue) print sess.run([y]) print sess.run([y]) print sess.run([y]) print sess.run([y]) print sess.run([y]) print sess.run([y]) print sess.run([y]) print sess.run([y]) print sess.run([y]) print sess.run([y]) print sess.run([y]) print sess.run([y]) coord.request_stop() coord.join(threads) For example, I have two files, 'cs_disp_train.txt' and 'cs_limg_train.txt'; one holds the file paths of depth images, the other the paths of the corresponding color images. The code creates two queues: one holds the names of the two text files, the other is filled with the lines (image paths) read from them. I am using tf.train.QueueRunner too, but I am not sure I understand it. I got inspiration from here, although that example reads TFRecords files. Hope this helps.
I have multiple CSV files which contain features. One feature is the filename of an image. I want to read the CSV files line by line and push the path to the corresponding image into a new queue. Both queues should be processed in parallel. csv_queue = tf.FIFOQueue(10, tf.string) csv_init = csv_queue.enqueue_many(['sample1.csv','sample2.csv','sample3.csv']) path, label = read_label(csv_queue) image_queue = tf.FIFOQueue(100, tf.string) image_init = image_queue.enqueue(path) _, image = read_image(image_queue) with tf.Session() as sess: csv_init.run() image_init.run() print(sess.run([key, label, path])) # works print(sess.run(image)) # works print(sess.run([key, label, path])) # works print(sess.run(image)) # will deadlock unless I do iq_init.run() Implementation of the helper functions (e.g. read_csv) can be found here. Can I "hide" the call to iq_init.run() behind sess.run(image) to avoid the deadlock and allow batching?
Combine queues for async io with auto enqueue in tensorflow
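A rough sketch of how the QueueRunner pattern from the answer above might be adapted to the CSV case in the question, using the same legacy TF 1.x queue API. The CSV column layout (image_path,label), the file names and the PNG image format are assumptions made for illustration.

import tensorflow as tf  # legacy TF 1.x queue-based input pipeline

# Queue of CSV file names; each line is assumed to be: image_path,label
csv_queue = tf.train.string_input_producer(['sample1.csv', 'sample2.csv', 'sample3.csv'])

reader = tf.TextLineReader()
_, line = reader.read(csv_queue)
path, label = tf.decode_csv(line, record_defaults=[[''], ['']])

# Second queue holding (path, label) pairs, filled by a QueueRunner so no
# manual enqueue is needed per step and the pair stays matched.
pair_queue = tf.FIFOQueue(100, [tf.string, tf.string])
enqueue_op = pair_queue.enqueue([path, label])
tf.train.add_queue_runner(tf.train.QueueRunner(pair_queue, [enqueue_op]))

image_path, image_label = pair_queue.dequeue()
image = tf.image.decode_png(tf.read_file(image_path))  # assumes PNG files

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    print(sess.run([image_label, image])[0])
    coord.request_stop()
    coord.join(threads)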