Columns: Response (string, 8–2k chars), Instruction (string, 18–2k chars), Prompt (string, 14–160 chars)
Yes, it's possible. In your code, perform a loop like this. For all sources (an array of source names, for example):

- Create the Pub/Sub reader on this source (you get a PCollection)
- Apply the transformation on the PCollection
- Create the sink dedicated to the source for the transformed PCollection

You reuse the transformation, but the source and the sink are specific. Your Dataflow graph will show you this:

```
pubsub_topic_abc ---> transformation ---> gcs_bucket_abc
pubsub_topic_def ---> transformation ---> gcs_bucket_def
pubsub_topic_ghi ---> transformation ---> gcs_bucket_ghi
```

But all will run in the same Dataflow job.
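To make the loop concrete, here is a rough Apache Beam sketch of that structure (shown in Python rather than the question's Java; the project, topic and bucket names are placeholders, and the windowing/write settings are illustrative only, not a production configuration):

```python
import apache_beam as beam
from apache_beam.io import fileio
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import FixedWindows

# Hypothetical topic -> output prefix mapping (placeholders, not real resources)
sources = {
    "projects/my-project/topics/pubsub_topic_abc": "gs://gcs_bucket_abc/output",
    "projects/my-project/topics/pubsub_topic_def": "gs://gcs_bucket_def/output",
    "projects/my-project/topics/pubsub_topic_ghi": "gs://gcs_bucket_ghi/output",
}

def transform(message_bytes):
    # Shared transformation, reused for every source
    return message_bytes.decode("utf-8")

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    # One read -> transform -> write branch per source, all in one job
    for idx, (topic, out_prefix) in enumerate(sources.items()):
        (
            p
            | f"Read_{idx}" >> beam.io.ReadFromPubSub(topic=topic)
            | f"Transform_{idx}" >> beam.Map(transform)
            | f"Window_{idx}" >> beam.WindowInto(FixedWindows(60))
            | f"Write_{idx}" >> fileio.WriteToFiles(path=out_prefix)
        )
```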
I have 3 different Pubsubs (source) and 3 corresponding GCS buckets (sink) for them, processing similar data. Currently my Java application provisions three Cloud Dataflow assets which write the data from the Pubsubs to the GCS buckets using windowed writes.

Current pipelines:

```
pubsub_topic_abc ---> dataflow_abc ---> gcs_bucket_abc
pubsub_topic_def ---> dataflow_def ---> gcs_bucket_def
pubsub_topic_ghi ---> dataflow_ghi ---> gcs_bucket_ghi
```

Is there a way I could make a pipeline use a single Dataflow job which could read data from multiple sources and write them to multiple corresponding sinks? Basically, data from pubsub_topic_abc should go to gcs_bucket_abc, etc.

Desired pipeline:

```
pubsub_topic_abc ----              ---> gcs_bucket_abc
                     |            |
pubsub_topic_def -------> dataflow -------> gcs_bucket_def
                     |            |
pubsub_topic_ghi ----              ---> gcs_bucket_ghi
```

I found this link which explains how a Dataflow can read from multiple Pubsubs, but I am not sure how to implement the multiple-sink write feature (dynamic output paths?). Is it possible?
Can we write data from multiple Pubsub (source) to multiple GCS (sink) using a single Google Cloud Dataflow?
I would use scp for this task. Here's an example of me copying over a file called foo.sh to the remote host:

```
scp -i mykey.pem foo.sh "ec2-user@123-123-123-123:/usr/tmp/foo.sh"
```

In the example:

- mykey.pem is my .pem file
- foo.sh is the file I want to copy across
- ec2-user is the user on the host
- 123-123-123-123 is the (fake) public IP address of the host
- /usr/tmp/foo.sh is the location where I want the file to be
I want to write a Jenkins pipeline in which, at a particular step, I have to copy a few zip files from a different Linux machine. The pipeline will be running on an AWS EC2 agent, so I have to copy the zip files from the Linux machine to the AWS EC2 instance. I tried a few ways to handle this using curl and scp but was not able to achieve it. Is there a better way? With curl, I am facing a "connection reset by peer" error. Please help.
Copying files from a Linux machine to an AWS EC2 instance
The problem is with this statement:

```python
pipeline = Pipeline(stages=[stage_string, stage_one_hot, assembler, rf])
```

stage_string and stage_one_hot are lists of PipelineStage, while assembler and rf are individual pipeline stages. Modify your statement as below:

```python
stages = stage_string + stage_one_hot + [assembler, rf]
pipeline = Pipeline(stages=stages)
```
Facing this error for this code:

```python
stage_string = [StringIndexer(inputCol=c, outputCol=c + "_string_encoded") for c in categorical_columns]
stage_one_hot = [OneHotEncoder(inputCol=c + "_string_encoded", outputCol=c + "_one_hot") for c in categorical_columns]
assembler = VectorAssembler(inputCols=feature_list, outputCol="features")
rf = RandomForestClassifier(labelCol="output", featuresCol="features")
pipeline = Pipeline(stages=[stage_string, stage_one_hot, assembler, rf])
pipeline.fit(df)
```

```
Cannot recognize a pipeline stage of type <class 'list'>.
Traceback (most recent call last):
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/ml/base.py", line 132, in fit
    return self._fit(dataset)
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/ml/pipeline.py", line 97, in _fit
    "Cannot recognize a pipeline stage of type %s." % type(stage))
TypeError: Cannot recognize a pipeline stage of type <class 'list'>.
```
How to handle StringIndexer and OneHotEncoder in PySpark pipeline stages
I don't think your command did work; that's the problem.

The -Directory switch of Get-ChildItem makes it return only directories, not files. If you want to return files, use the -File switch.

Next up, if you have a list of items from Get-ChildItem, those give you a System.IO.FileSystemInfo object. We can provide those directly to the Get-Content command to read the file into a string. From a string, you can call any of the general operators PowerShell offers, including the string replace operator, -Replace. The output of this can be piped over to Set-Content and used to update or append content to an existing file.

```powershell
Get-ChildItem | foreach { (Get-Content $_) -replace "str1", "str2" | Set-Content $_.FullName }
```

Note the only real change here is that I removed -Directory from Get-ChildItem, and then fixed the syntax on the $PSItem (the official name for the current variable in a forEach loop, often written as $_).

The reason you can use the syntax I showed is that ForEach-Object gives you that special $_ or $PSItem variable to use to reference "this" item in a collection.
I have the following directory tree, composed of a root directory containing 10 subdirectories, and 1 file in each subdirectory:

```
root/
  dir1/file
  dir2/file
  ...
  dir10/file
```

I would like to edit the content of the files recursively, replacing a string "str1" by "str2". I issued the following command in PowerShell:

```powershell
Get-ChildItem -Directory | foreach {(Get-Content $_/file) -replace "str1", "str2" | Set-Content $_/file}
```

And it worked like a charm, but I still do not understand how. I use a pipeline in the foreach loop, but the following call to $_ still refers to the pipeline outside the foreach loop. Why is it so?
Nested pipeline variable in PowerShell
In reality, the pipeline plugin is not just one plugin but a bunch of 6 to 8 plugins. So you may want to install all the pipeline-related items in the available plugins section of Manage Jenkins. Once this is done, a reboot of Jenkins is required for the changes to take effect. Here are some of them:

https://plugins.jenkins.io/build-pipeline-plugin/
https://www.jenkins.io/doc/pipeline/steps/pipeline-build-step/
https://plugins.jenkins.io/pipeline-stage-view/
I have started learning Jenkins recently. I installed Docker on a server which I created on AWS, and using Docker I have installed Jenkins. I wanted to test a "Hello" pipeline stage by creating a new item, but when I go to the Pipeline tab I can't see any options like "pipeline script" or "pipeline script from SCM". I have installed the git plugin, and the pipeline plugin also seems to be installed successfully. I am not able to continue my study further. I will be really thankful if someone can help me here.
Jenkins job - no options found under pipeline section
You are correct: when you --update a pipeline, it will process new data but will not re-load old data. It sounds like what you want is slowly updating side inputs, which unfortunately has not been implemented yet. You could instead try draining and re-starting your pipeline.
I have a Dataflow pipeline running that fetches a configuration of active tenants (stored in GCS) and feeds it into an ActiveTenantFilter as a side input. The configuration is rarely updated, hence why I decided to re-deploy the pipeline, using the --update flag, whenever it is updated.

However, when using the update flag, the file is not fetched again, i.e., the state is maintained. Is it possible to enforce that this PCollectionView is updated whenever the pipeline is re-deployed?
Force update of SideInput on updating Dataflow pipeline
You should say where the pipeline function comes from. It might be a third-party library or from Node's stream module.

If it is stream.pipeline, then its return value is a stream, so "await pipeline(..." will not wait, since it is not a Promise. You can turn it into a Promise with util.promisify (see the reference):

```javascript
const util = require('util');
const { pipeline } = require('stream');
const pipelinePromise = util.promisify(pipeline);
// ...
await pipelinePromise();
```
I am using a for..of loop with pipeline, but the statements after the loop are executed even before the pipeline finishes execution; this happens even if I add await to pipeline. Here is my relevant code:

```javascript
for (const m of metadata) {
  if (m.path) {
    let dir = `tmp/exports/${exportId}/csv_files_tranformed/${m.type}`;
    let fname = `${dir}/${m.sname}`;
    fs.mkdirSync(dir, {recursive: true}, (err) => {
      if (err) throw err;
    });
    tempm = m;
    await pipeline(
      fs.createReadStream(m.path),
      csv.parse({delimiter: '\t', columns: true}),
      csv.transform((input) => {
        return input;
      }),
      csv.stringify({header: true, delimiter: '\t'}),
      fs.createWriteStream(fname, {encoding: 'utf16le'}),
      (err) => {
        if (err) {
          console.error('Pipeline failed.', err);
        } else {
          console.log('Pipeline succeeded.');
        }
      }
    )
  }
}
```

How do I ensure that the pipeline is fully completed before moving to the next statements? Thanks.
Node.js: for..of with pipeline not waiting for completion
First, please read "Why is $false -eq "" true?"

The same applies to Where-Object ... -eq:

```powershell
[pscustomobject]@{ val = 0 }  | Where-Object val -eq ""   # returns an object
[pscustomobject]@{ val = "" } | Where-Object val -eq 0    # null
```

As you've already noticed, the type of the left-hand object is [CommandInfo] while the right type is [String].

Now, there are several ways to make your code work:

```powershell
# ReferencedCommand.Name = "Get-Content"
Get-Alias | Where-Object { $_.ReferencedCommand.Name -eq "Get-Content" }

# put [string] on the left
Get-Alias | Where-Object { "Get-Content" -eq $_.ReferencedCommand }

# the -Like operator casts the left side as [string]
Get-Alias | Where-Object -Property ReferencedCommand -Like Get-Content
```
I'm trying to hone my PowerShell skills, and as an exercise I'm trying to get all aliases pointing to Get-Content (note: I'm fully aware that the easy way to do this is simply Get-Alias -Definition Get-Content, but I'm trying to do this using piping).

My attempt is to run something like:

```powershell
Get-Alias | Where-Object -Property ReferencedCommand -eq Get-Content
```

or

```powershell
Get-Alias | Where-Object -Property ReferencedCommand -eq "Get-Content"
```

but that returns blank.

Running Get-Alias | Get-Member reveals that ReferencedCommand is a System.Management.Automation.CommandInfo, which could explain why my attempts do not return anything. Now I don't know where to go from here. Anyone?
Matching properties of commands
The accepted way is:

```ruby
require 'English' # Capital 'E'!
$CHILD_STATUS.exitstatus
```

Note that the English lib is a standard library that comes bundled with all versions of Ruby.
I've been trying to make my pipeline (following RuboCop syntax) and Linux/Windows machines happy, but for some reason I am stuck on the exit status check. It is causing problems. I have used the following, with the results below:

- $?.exitstatus — NOT OK in RuboCop (syntax concerns); OK in Linux; OK in Windows
- system() — OK in RuboCop; OK in Linux; NOT OK in Windows (it is "not recognized as an internal or external command")
- $CHILD_STATUS.exitstatus — OK in RuboCop; NOT OK in Linux (it needs the require 'English' library); OK in Windows

I don't want to install anything on the machines. What is the best way to make it all OK? Thank you.
What is the best EXIT status check Command that works in Rubocop, Linux and Windows servers?
I found the solution to my problem in Microsoft's Azure Pipelines documentation.

Build.SourceBranch: the branch of the triggering repo the build was queued for. Some examples:

- Git repo branch: refs/heads/master
- Git repo pull request: refs/pull/1/merge
I'm looking to have different stages of my ADO pipeline run depending on the outcome of a condition. For this condition, I want to compare the name of the branch (the one that triggered the pipeline to run) against a string literal. I can't do this until I can access the name of this branch in a dynamic way.

For example, I don't want my production stage to run unless the branch that triggered the stage is named master.

I am familiar with the predefined variable Build.Repository.Name being used to get the name of the repository, but what I really need is the name of the branch in that repository that triggered the pipeline. So, is there a variable that holds my triggering branch's name? And if so, what is it?
How can I know which git branch triggered my ADO pipeline?
You need to use the $map and $range operators. The following query will be helpful:

```javascript
db.collection.aggregate([
  {
    $project: {
      arr: {
        $map: {
          input: { $range: [ 0, { $size: "$arr" }, 3 ] },
          as: "a",
          in: { $arrayElemAt: [ "$arr", "$$a" ] }
        }
      }
    }
  }
])
```

MongoPlayground link
I have a MongoDB database with this document structure:

```javascript
{
  "_id": ObjectId("2145214545ffff"),
  "arr": [a, b, c, d, e, f, g, h, i, j]
}
```

I want to execute an aggregation pipeline which gives me this result:

```javascript
{
  "_id": ObjectId("2145214545ffff"),
  "arr": [a, d, g, j]
}
```

So what I need is a kind of filtering on the array's items which gives me the 1st, 4th, 7th (and so on) items. Thanks in advance.
filter Items of Array by Index in Aggregation Pipeline
I managed to fix this error by installing the Visual C++ redistributable on my Windows machine:

https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads
I am trying to download the French module for spaCy with the command python -m spacy download fr_core_news_md, but I get this error:

```
Traceback (most recent call last):
  File "C:\Users\Lenovo\AppData\Local\Programs\Python\Python38\lib\runpy.py", line 184, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "C:\Users\Lenovo\AppData\Local\Programs\Python\Python38\lib\runpy.py", line 143, in _get_module_details
    return _get_module_details(pkg_main_name, error)
  File "C:\Users\Lenovo\AppData\Local\Programs\Python\Python38\lib\runpy.py", line 110, in _get_module_details
    __import__(pkg_name)
  File "C:\Users\Lenovo\AppData\Local\Programs\Python\Python38\lib\site-packages\spacy\__init__.py", line 12, in <module>
    from . import pipeline
  File "C:\Users\Lenovo\AppData\Local\Programs\Python\Python38\lib\site-packages\spacy\pipeline\__init__.py", line 4, in <module>
    from .pipes import Tagger, DependencyParser, EntityRecognizer, EntityLinker
  File "pipes.pyx", line 1, in init spacy.pipeline.pipes
ImportError: DLL load failed while importing nn_parser: The specified module could not be found.
```

How do I fix it? Python 3.8.2 (64-bit) on Windows 10 x64. Thank you!
Spacy_DLL load failed while importing nn_parser
Your code is not provided; however, from the name ridge_grid_search I suppose you are using sklearn.model_selection.GridSearchCV for model selection. Grid search should be used to tune the hyperparameters of a single model and should not be used to compare different models with each other. ridge_grid_search.best_score_ returns the best score achieved by the best hyperparameters found during the grid search of the given algorithm.

For model comparison you should use a cross-validation algorithm such as k-fold cross-validation. While using cross-validation, make sure every model is trained and tested on the same training/testing sets for a fair comparison.
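As an illustration of that advice (not part of the original answer), a minimal sketch comparing models on identical folds with cross_val_score, converted to RMSE the same way as in the question, might look like this (hypothetical data and model choices):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import KFold, cross_val_score

X, y = make_regression(n_samples=300, n_features=20, noise=0.1, random_state=0)

# The same folds are reused for every model, so the comparison is fair.
cv = KFold(n_splits=5, shuffle=True, random_state=0)

for name, model in [("Ridge", Ridge()), ("Lasso", Lasso())]:
    scores = cross_val_score(model, X, y, cv=cv, scoring="neg_mean_squared_error")
    rmse = np.sqrt(-scores)  # same sqrt(-score) conversion as in the question
    print(f"{name}: mean RMSE {rmse.mean():.4f} (std {rmse.std():.4f})")
```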
I ran an experiment with several models and generated a best score for each of them to help me decide which one to choose for the final model. The best-score results have been generated with the following code:

```python
print(f'Ridge score is {np.sqrt(ridge_grid_search.best_score_ * -1)}')
print(f'Lasso score is {np.sqrt(lasso_grid_search.best_score_ * -1)}')
print(f'ElasticNet score is {np.sqrt(ElasticNet_grid_search.best_score_ * -1)}')
print(f'KRR score is {np.sqrt(KRR_grid_search.best_score_ * -1)}')
print(f'GradientBoosting score is {np.sqrt(gradientBoost_grid_search.best_score_ * -1)}')
print(f'XGBoosting score is {np.sqrt(XGB_grid_search.best_score_ * -1)}')
print(f'LGBoosting score is {np.sqrt(LGB_grid_search.best_score_ * -1)}')
```

The results are posted here:

```
Ridge score is 0.11353489315048314
Lasso score is 0.11118171778462431
ElasticNet score is 0.11122236468840378
KRR score is 0.11322596291030147
GradientBoosting score is 0.11111049287476948
XGBoosting score is 0.11404604560959673
LGBoosting score is 0.11299104859531962
```

I am not sure how to choose the best model. Is XGBoosting my best model in this case?
I am not clear on the meaning of the best_score_ from GridSearchCV
Anyone with write access to the repository can change that file and add themselves to the list. But my concern with the scripted approach is a different one, as you may end up confusing your future self or your successor(s) with a block of code to maintain. The person who triggers a job may be a different person than the one who would review the code; also, I'm not sure you want to assume that triggering equals approval.

Let me explain what I've been doing: imagine having a development and a master branch, where only master gets deployed automatically. Nobody can push directly to master when you protect the branch. Everything has to be merged into master by using merge requests. Permission-wise it still comes down to having write access (developer or a higher role). This keeps the step of approval in the GUI (where you would expect an approval feature to appear once it's added). Additionally, accepting a merge request will trigger a dedicated e-mail notification.

This works very well when you make sure merging into master is a taboo except for the lead developers. Maybe this is not applicable to your situation, but it is a different approach; even this one needed my successor to be educated on where to click in order to approve a merge request.

If you want to fast-track the deployment of some developers, you can assign them a higher role and use my suggested approach to let them push to the master branch directly. That way they skip the step of awaiting approval.
I want to discuss and learn more approaches for adding an approval step to GitLab CI. Everybody knows that GitLab doesn't have an approval feature in the pipeline. I made a small template to add a kind of approval step to any job. This is the content of the template:

```yaml
.approval_template:
  image: node:buster
  before_script:
    - |
      cat <<'EOF' >> approval.js
      var users = process.env.USERS.split(',');
      if (users.includes(process.env.GITLAB_USER_LOGIN)) {
        console.log(process.env.GITLAB_USER_LOGIN + " allowed to run this job")
        process.exit(0)
      } else {
        console.log(process.env.GITLAB_USER_LOGIN + " cannot trigger this job")
        console.log("List of users allowed to run this job")
        for (user in users) {
          console.log(users[user])
        }
        process.exit(1)
      }
      EOF
    - node approval.js
    - rm approval.js
  when: manual
```

How does it work? The template checks whether the user who triggered the job is inside the USERS variable (an array). The check happens in the before_script block, and it works in any job (without overwriting the before_script).

It works very well, but I want to know if there is a better approach.
Approval step on gitlab CI
What you are essentially trying to do here is to have a COPY command inside a RUN command. Dockerfiles don't have nested commands.

Moreover, a RUN command runs inside an intermediate container built from the image. Namely, ARG arg=a will create an intermediate image, then Docker will spin up a container, use it to run the RUN command, and commit that container as the next intermediate image in the build process.

So COPY is not something that can run inside the container; in fact, RUN basically runs a shell command inside the container, and COPY is not a shell command.

As far as I can tell, Dockerfiles don't have any means of doing conditional execution. The best you can do is:

```dockerfile
COPY test.txt /
RUN if [ "$arg" = "a" ] ; then \
      echo arg is $arg; \
    else \
      echo arg is $arg; \
      rm -r test.txt; \
    fi
```

But keep in mind that if test.txt is a 20 GB file, the size of your image will still be > 20 GB.
I have a Dockerfile in which I want to copy certain files based on an input environment variable. So far I have tried the following. I am able to verify that my environment variable is passed correctly. During my docker build I get the following error:

```
/bin/sh: COPY: not found
```

```dockerfile
ARG arg=a
RUN if [ "$arg" = "a" ] ; then \
    echo arg is $arg; \
    COPY test.txt /
    else \
    echo arg is $arg; \
    fi
```
Conditional check in Dockerfile
It seems your agent does not have maven installed on it. Also, check if maven is installed on the path mentioned here.
I'm writing a pipeline. When running a mvn clean install command on my project, I get this error: "mvn is not recognized as an internal or external command, operable program or batch file". My project is a Maven Java project and my operating system is Windows 10.

Jenkinsfile:

```groovy
pipeline {
    agent any
    stages {
        stage('Unit tests') {
            steps {
                // Run the maven build
                // if (isUnix()) {
                //     sh "'${mvnHome}/bin/mvn' clean test -Dtest=TestRunner"
                // } else {
                bat 'mvn clean test'
                // }
            }
        }
        stage("build & SonarQube analysis") {
            agent any
            steps {
                withSonarQubeEnv('My SonarQube Server') {
                    bat 'C:/Users/ANTONIO/Documents/sonar-scanner-4.2.0.1873-windows/bin/sonar-scanner'
                }
            }
        }
        stage("Quality Gate") {
            steps {
                timeout(time: 1, unit: 'HOURS') {
                    error "Pipeline aborted due to quality gate failure: ${qg.status}"
                    waitForQualityGate abortPipeline: true
                }
            }
        }
    }
}
```
Jenkins pipeline: mvn is not recognized as an internal or external command, operable program or batch file
These kinds of diagrams are necessarily incomplete, so I wouldn't take it too seriously.

The text mentions nothing about virtual memory, address translation, load/store failures, or even instruction-memory address translations and failures, etc.

Other things commonly missing in these diagrams include:

- the capture of the pc for jal-type instructions — there's no datapath that forwards the pc to the registers.
- the change of pc from a register for jump-register (jr-type) instructions — there's no datapath for a register to go to the pc.
- there is also nothing in these kinds of diagrams for cache misses (I$ or D$).
- I already mentioned missing address translation for both data and instruction memories.
- the AUIPC instruction is also missing some datapaths.

So, the diagram is certainly incomplete. There is very likely some handling of exceptions for data memory accesses (also for instruction memory accesses); it's just not mentioned in this diagram, as this diagram ignores address translation in general.
Here is an image of the RISC-V pipeline with flush for exceptions. I have a question about pipeline flush for exceptions. In RISC-V, there is IF.flush, ID.flush and EX.flush in the pipeline for exceptions. But I wonder why there is no MEM.flush in the pipeline for exceptions. I think that if we detect an exception in the MEM stage (e.g. an invalid data memory access), we have to flush the MEM stage to make the MEM.RegWrite value 0. Thank you.
Why is there no MEM.flush in the pipeline for exceptions?
So you may not have the same issue I had, but it's definitely one cause.

Let's say you start with a branch called "master" and a build pipeline called "Build Master" which uses a YAML file called "azure-pipelines.yml". If, like me, you created a feature branch before setting up the build for the master branch, then your feature branch won't have the azure-pipelines.yml file.

To make the "Run" button enabled, make sure that when using the "Build Master" pipeline, the branch you are trying to build has an azure-pipelines.yml file in it, as it will use that YAML file for the build and not the one in the "master" branch. If it doesn't, just create one in the branch and copy over the contents from the master branch.

Hope that makes sense.
I am new to Azure DevOps Pipelines. I have connected to a Bitbucket repo, and running against master for the build works fine. I am trying to manually run against a branch. I choose the correct branch from branch/tag and I see that the "Run" button is disabled. If I choose "master" it is enabled. What am I missing?
Run Pipeline Button disabled
I have recently come to learn that single-threaded behavior is guaranteed by using a single worker of type n1-standard-1 and, additionally, the exec_arg --numberOfWorkerHarnessThreads=1, as this restricts the number of JVM threads to 1 as well.
I was trying to configure and deploy a Cloud Dataflow job that is truly single-threaded, to avoid concurrency issues while creating/updating entities in the Datastore. I was under the assumption that using an n1-standard-1 machine ensures that the job runs on a single thread on a single machine, but I have come to learn the hard way that this is not the case.

I have gone over the suggestions mentioned in an earlier query here: "Can I force a step in my dataflow pipeline to be single-threaded (and on a single machine)?" But I wanted to avoid implementing a windowing approach around this, and wanted to know if there is a simpler way to configure a job to ensure single-threaded behavior. Any suggestions or insights would be greatly appreciated.
Can I configure a Dataflow job to be single threaded?
Found the solution: while unpickling the file, I had used encoding="bytes" instead of "latin1". Open the pickle using latin1 encoding:

```python
with open(old_pkl, "rb") as f:
    loaded = pickle.load(f, encoding="latin1")
```

and everything worked fine. For better clarification, refer to this linked answer.
I am trying to convert a pickle file from Python 2 to Python 3 using the code below:

```python
import os
import dill
import pickle
import argparse

def convert(old_pkl):
    """
    Convert a Python 2 pickle to Python 3
    """
    # Make a name for the new pickle
    new_pkl = os.path.splitext(os.path.basename(old_pkl))[0] + "_p3.pkl"

    # Convert Python 2 "ObjectType" to Python 3 object
    dill._dill._reverse_typemap["ObjectType"] = object

    # Open the pickle using latin1 encoding
    with open(old_pkl, "rb") as f:
        loaded = pickle.load(f, encoding="bytes")

    # Re-save as Python 3 pickle
    with open(new_pkl, "wb") as outfile:
        pickle.dump(loaded, outfile)
```

Pickling worked fine. But the problem is that when I tried to print the output of the Python 3 pickled file, instead of showing this:

```python
model = Pipeline([('count', CountVectorizer())])
print(model)

Pipeline(memory=None,
     steps=[('count_vectorizer', CountVectorizer(analyzer='word', binary=False, decode_error='strict',
        dtype=<class 'numpy.int64'>, encoding='utf-8', input='content',
        lowercase=True, max_df=1.0, max_features=None, min_df=1,
        ngram_range=(1, 1), preprocessor=None, stop_words=None)])
```

it shows this:

```
Pipeline(memory=None, steps=None, verbose=None)
```
unpickling model file python scikit-learn(Pipeline(memory=None, steps=None, verbose=None))
Does this mean the built-in lemmatisation process is an unmentioned part of the pipeline?

Simply, yes. The Lemmatizer is loaded when the Language and Vocab are loaded.

Usage example:

```python
import spacy

nlp = spacy.load('en_core_web_sm')
doc = nlp(u"Apples and oranges are similar. Boots and hippos aren't.")

print('\n')
print("Token Attributes: \n", "token.text, token.pos_, token.tag_, token.dep_, token.lemma_")
for token in doc:
    # Print the text and the predicted part-of-speech tag
    print("{:<12}{:<12}{:<12}{:<12}{:<12}".format(token.text, token.pos_, token.tag_, token.dep_, token.lemma_))
```

Output:

```
Token Attributes:
 token.text, token.pos_, token.tag_, token.dep_, token.lemma_
Apples      NOUN        NNS         nsubj       apple
and         CCONJ       CC          cc          and
oranges     NOUN        NNS         conj        orange
are         AUX         VBP         ROOT        be
similar     ADJ         JJ          acomp       similar
.           PUNCT       .           punct       .
Boots       NOUN        NNS         nsubj       boot
and         CCONJ       CC          cc          and
hippos      NOUN        NN          conj        hippos
are         AUX         VBP         ROOT        be
n't         PART        RB          neg         not
.           PUNCT       .           punct       .
```

Check out this thread as well; there is some interesting information regarding the speed of the lemmatization.
I want to use lemmatisation, but I can't see directly in the docs how to use spaCy's built-in lemmatisation in a pipeline. In the docs for the lemmatiser, it says:

"Initialize a Lemmatizer. Typically, this happens under the hood within spaCy when a Language subclass and its Vocab is initialized."

Does this mean the built-in lemmatisation process is an unmentioned part of the pipeline? It's mentioned in the docs under the pipeline subheading, while in the docs for pipeline usage there is only mention of "custom lemmatisation" and how to use it. This is all kind of conflicting information.
How to use spaCy's built-in lemmatiser in a spaCy pipeline?
You need to use multiline syntax for the command argument. Try this to get cd working as expected:

```groovy
sshCommand remote: remote, command: """
    cd /abc/set/
    pwd
"""
```

The current directory is reset with each invocation of sshCommand because each one is a new shell. I'm not sure about the sshScript, though. You may consider changing your script file to include these operations in the multiline syntax.
In my Jenkins scripted pipeline, one stage runs a bash script on a remote machine. I tried a few ways but could not meet the following requirement: since I want to log in to the remote server and then run a few commands to deploy on it, I am not able to cd using the SSH Pipeline plugin.

So I want to use sshCommand to run a cd command on the remote server and execute a script. What happens is that, except for the cd command, all other shell commands get executed.

```groovy
stage("CONFIGURE ENV") {
    withCredentials([usernamePassword(credentialsId: 'xxxxxxxxxx', passwordVariable: 'Password', usernameVariable: 'Username')]) {
        remote.user = Username
        remote.password = Password
        sshCommand remote: remote, command: "cd /abc/set/"
        sshCommand remote: remote, command: "pwd"
        sshScript remote: remote, script: "env.sh"
```

Error message I keep on getting while running the build:
Not able to cd while using SSH pipeline sshCommand in jenkinsfile
Interprocess pipes and Redis' network pipelining are different things. One is explained at https://www.tutorialspoint.com/inter_process_communication/inter_process_communication_pipes.htm and the other at https://en.wikipedia.org/wiki/Protocol_pipelining.
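For a concrete picture of what Redis' protocol pipelining means on the client side, here is a small sketch using the redis-py client (not part of the original answer; it assumes a Redis server running on localhost):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Without pipelining, each command is its own request/response round trip:
r.set("counter", 0)
r.incr("counter")

# With pipelining, commands are buffered client-side and sent over the socket
# in one batch; all replies come back together when execute() is called.
pipe = r.pipeline()
pipe.set("batched", 1)
pipe.incr("batched")
pipe.expire("batched", 120)
print(pipe.execute())  # e.g. [True, 2, True]
```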
The essence of the Redis pipeline is to change the read and write order of the instructions in the pipeline. We usually say that a pipe is a means of inter-process communication, while the Redis pipeline is socket-based communication, so the two are not comparable. Is there a problem with this understanding?
redis pipeline and pipeline
If there are some limitations that you cannot change in the Data Transfer Service, I would advise using Python with the AWS SDK and the Google Cloud library, for reading from S3 and writing to BigQuery respectively. You can find these libraries in other languages too.

I would also advise using a serverless architecture for this. In GCP you could use a Cloud Function if your transfer lasts less than 9 minutes (this is a Cloud Function limitation). In AWS you could use a Lambda function if your transfer lasts less than 15 minutes.

If your transfer needs more time, you could use a VM in Compute Engine. In this case you could also use Cloud Scheduler to turn your VM on and off at exactly the time you want. You can find the tutorial for that here.

Feel free to provide some extra information if you have any questions.
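A rough sketch of that Python route, using boto3 and the google-cloud-bigquery client (the bucket, key, dataset and table names are placeholders; authentication, error handling and scheduling are left out):

```python
import boto3
from google.cloud import bigquery

# Download the CSV from S3 (placeholder bucket/key)
s3 = boto3.client("s3")
s3.download_file("my-s3-bucket", "exports/data.csv", "/tmp/data.csv")

# Load it into BigQuery (placeholder dataset.table)
bq = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
)
with open("/tmp/data.csv", "rb") as f:
    load_job = bq.load_table_from_file(f, "my_dataset.my_table", job_config=job_config)

load_job.result()  # wait for the load job to finish
print(f"Loaded {load_job.output_rows} rows")
```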
Currently, I have a data flow set up like below:

```
AWS S3 (csv format) -> Data Transfer Service (once a day) -> Google BigQuery
```

However, I would like to change the rate of data transfer, but since the transfer service doesn't offer that, I would have to implement my own method.

What would be your recommendations? (Currently, I am thinking of just using the AWS SDK to get the objects and then inserting them using the Google BigQuery client, but I haven't tried it yet, and due to my lack of understanding I don't know if that's even possible or scalable... give me a hint or recommendations. Thank you.)
Hourly data loading from AWS S3 to Google Big Query
Your first script is actually not a script because it does not contain any code. ;-) Let's call it data. You could create a [PSCustomObject] and use its properties to sort the way you like it, like this:

```powershell
$data = Get-Content -Path C:\sample\data.txt
$data |
    ForEach-Object {
        $row = $PSItem -split ':'
        [PSCustomObject]@{
            Name        = $row[0].Trim()
            CoffeeCount = [int]$row[1].Trim()
        }
    } |
    Sort-Object -Property CoffeeCount -Descending
```
I wrote a PowerShell script whose output is in the "[name]: [piece]" format (name is the customer name and piece is the number of coffees that customer drank). This script is called first.ps1. For example:

```
Josh: 9
Sam: 13
Mark: 2
```

My problem is that I have to sort this output with another script, like this: .\first.ps1 | .\second.ps1 (second.ps1 is my sorting script). In that case, for the previous example, my output should be:

```
Sam: 13
Josh: 9
Mark: 2
```

I tried some code and I can successfully read the input through the pipeline, but I have some sorting problems: I can only sort by names, and I haven't found anything to sort by the numbers. Here is the code I tried so far:

```powershell
$input | %{
    $name  = $_.split(":")[0]
    $piece = [int]$_.split(":")[1]
    Write-Output $name" "$piece
} | sort -Descending
```

Thank you in advance.
Sorting text by the second element in PowerShell
The True is coming from the expire call in the pipeline. In isolation:

```python
>>> p.hincrby('key', 'val', 1)
Pipeline<ConnectionPool<Connection<host=localhost,port=6379,db=0>>>
>>> p.expire('key', 120)
Pipeline<ConnectionPool<Connection<host=localhost,port=6379,db=0>>>
>>> print(p.execute())
[1L, True]
```
For performance improvements, I have used a Redis pipeline instead of single insertions. Please find the code snippet below:

```python
r = redis.Redis(connection_pool=redis_pool)
r_pipeline = r.pipeline()

for key in keys:
    r_pipeline.hincrby(key, hash, amount)
    r_pipeline.expire(key, Globals.Cache.ttl)

return r_pipeline.execute()
```

The return value of r_pipeline.execute() is a list. Based on the documentation, it is supposed to increment and return the incremented value. But sometimes it actually returns the value and sometimes it just returns True.

I have gone through the documentation and googled, but I am still not able to figure out why hincrby is returning True in the pipeline. Can someone please help?
hincrby and hget returning True instead of actual value in Redis (python)
Pipedream exposes the event data from HTTP triggers in a JavaScript object called event, accessible in any code or action step.

By default, event contains some standard properties like event.body (the HTTP payload), event.headers (the HTTP headers), and more. [1]

I'm not sure what you mean by "tag" in the context of HTTP, but once you send an event, you should be able to select it and view its contents just below the HTTP trigger itself, inspecting the values of the event (see the event inspector screenshot).

[1] https://docs.pipedream.com/workflows/events/#http
Using Pipedream, how do I get a Name and Content tag from the HTTP trigger? I could not find anything like this online and would appreciate answers from people who know.
Using Pipedream, How do I get a Name and Content tag from the HTTP Trigger?
I'm not entirely certain what the error you're getting is. If you're literally trying to do "from .nodes import CleanData.function1", that won't work; imports don't work like that in Python. If you do something like this:

nodes.py has:

```python
class CleanData:
    def clean(arg1):
        pass
```

and pipeline.py has:

```python
from kedro.pipeline import Pipeline, node
from .nodes import CleanData

def create_pipeline(**kwargs):
    return Pipeline(
        [
            node(
                CleanData.clean,
                "example_iris_data",
                None,
            )
        ]
    )
```

that should work.
I want to organize the node functions by classes in the nodes.py file. For example, functions related to cleaning data are in a "CleanData" class with a @staticmethod decorator, while other functions stay in an "Other" class without any decorator (the names of these classes are merely representative). In the pipeline file, I tried importing the names of the classes, the names of the nodes, and also this: CleanData.function1 (which gave an error), and none of them worked. How can I call the nodes from the classes, if that is possible, please?
How to run functions from a Class in the nodes.py file?
I don't know whether the Data Factory expression language supports regex. Assuming it does not, the wildcard is probably positive-matching only, so using a wildcard to exclude specific patterns seems unlikely to work.

What you could do instead is use 1) Get Metadata to get the list of objects in the blob folder, then 2) a Filter where item().type is 'File' and the index of '-2192-' in the file name is < 0 (the indexes are 0-based), and finally 3) a ForEach over the Filter that contains the Copy activity.
So I have a pipeline in Azure Data Factory V2, and in that pipeline I have defined a Copy activity to copy files from Blob storage to Azure Data Lake Store. But I want to copy all the files except those that contain the string "-2192-".

So if I have these files:

```
213-1000-aasd.csv
343-2000-aasd.csv
343-3000-aasd.csv
213-2192-aasd.csv
```

I want to copy all of them using the Copy activity, but not 213-2192-aasd.csv. I have tried different regular expressions in the wildcard option but with no success. According to my knowledge the regular expression should be:

```
[^-2192-].csv
```

But it gives errors. Thanks.
How to use wildcards in filename in AzureDataFactoryV2 to copy only specific files from a container?
You can define an environment variable dynamically, e.g.:

```groovy
pipeline {
    agent any
    environment {
        GIT_MESSAGE = """${sh(
            script: 'git log --no-walk --format=format:%s ${GIT_COMMIT}',
            returnStdout: true
        )}"""
    }
    stages {
        stage('test') {
            steps {
                sh 'echo "Message: --${GIT_MESSAGE}--"'
            }
        }
    }
}
```
I have a question about a Jenkins pipeline. I would like to have a variable that gets data from the git commit.

For example, if the git commit says "Version 1.0.0", then the variable in the Jenkinsfile should be "1.0.0". If the commit is 2.0.0, then the variable should be 2.0.0.

I've already seen that with the changelog option in Jenkins you can get data from the git commit; unfortunately, I don't know how to put this data into a variable. Can anyone help me? I have already seen and tried the following:

```groovy
pipeline {
    when {
        changelog '1.0.0.0'
    }
    environment {
        nicevariable = " here should be the gitcommit see changelog"
    }
    agent none
    stages {
        stage("first") {
            sh "echo ${nicevariable}"
        }
    }
}
```
Jenkins Pipeline get variable from git commit
I assume foo is a sklearn pipeline object; if so, you can probably do this:

```python
for e in foo.named_steps['reg'].estimators_:
    print(e.best_iteration_)
```

foo.named_steps['reg'].estimators_ returns the list of estimators inside the MultiOutputRegressor, and e is the LGBMRegressor you used inside the MultiOutputRegressor. You can replace best_iteration_ with any attribute of the model you want to access.
Suppose a machine learning model, such as LightGBM's LGBMRegressor, has an attribute best_iteration_. How is this attribute accessible after calling the fit method, when sklearn's Pipeline and MultiOutputRegressor were used?

For Pipeline I've tried named_steps:

```python
foo.named_steps['reg']
```

which returns a sklearn.multioutput.MultiOutputRegressor object. Then I've tried .estimators_:

```python
foo.named_steps['reg'].estimators_
```

which returns a list. However, the list contains the initial parameters that were supplied to the model. Could someone please explain the ideal way to access a model's attributes?
Access attributes with sklearn Pipeline and MultiOutputRegressor
Have you tried using curl to test it first? For example, assuming you're targeting SDC (not SCH), something like this should work:

```
curl -u admin:admin -X POST http://10.0.0.199:18630/rest/v1/pipeline/SDCHTTPClientd8bc16bc-4b4a-49cd-ba4c-41d7831ff5bd/resetOffset -H "X-Requested-By: SDC"
```

From the HTTP Client, what do you have set for the auth type, and are you passing the right credentials? Are you passing the header attribute X-Requested-By? In any case, are you getting any errors?

Note that this REST API endpoint does not return a response body, just response code 200.
I want to reset the origin of a StreamSets pipeline using another pipeline. I made a pipeline that sends one useless record to an HTTP Client component. The HTTP Client contains the RESTful URL to reset the origin of a pipeline. It's something like this:

```
Resource URL: http://<hostname>:<port>/rest/v1/pipeline/StreamSetsPipelinec78f8739-8adb-47ad-beaa-77b3de60038d/resetOffset
```

The HTTP method is POST. I tested it and it doesn't reset the origin. Can anyone help me?
Reset Origin of a StreamSets Pipeline using another pipeline
The short answer to your question is no, unless you hack your way through and redefine/overwrite the scikit-learn functions.

When you use pipe.score(), it calls the score method of the classifier at the end of the pipeline. What happens under the hood is that all classifiers in scikit-learn are based on the ClassifierMixin class, for which .score() is defined through accuracy_score, and this is hard-coded (see here).
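To make that concrete, here is a small self-contained sketch (not from the original answer) showing that pipe.score() matches accuracy_score on the pipeline's predictions, and that other metrics such as F1-macro or recall-macro have to be computed explicitly from those predictions:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = make_pipeline(StandardScaler(), LogisticRegression())
pipe.fit(X_train, y_train)
y_pred = pipe.predict(X_test)

print('score():        %.3f' % pipe.score(X_test, y_test))      # hard-coded accuracy
print('accuracy_score: %.3f' % accuracy_score(y_test, y_pred))  # same value
print('F1 (macro):     %.3f' % f1_score(y_test, y_pred, average='macro'))
print('Recall (macro): %.3f' % recall_score(y_test, y_pred, average='macro'))
```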
Here is a part of my code:

```python
def ml_pipeline(self):
    if self.control_panel['ml_pipeline_switch']:
        self.model = make_pipeline(self.preprocessor, self.control_panel['ml_algo'][1])
        self.model.fit(self.X_train, self.y_train)

def ml_pipeline_result(self, show_control_panel_switch=True):
    if self.control_panel['ml_pipeline_switch']:
        print('Model score (training set): %.3f' % self.model.score(self.X_train, self.y_train))
        print('Model score (test set): %.3f' % self.model.score(self.X_test, self.y_test))
```

The score() seems to be producing accuracy. How can I swap accuracy for another performance metric such as F1-macro or recall-macro? I couldn't find anything in the documentation.
How do I swap accuracy with another performance metric from the score() function after fitting the ML model in Pipeline with scikit-learn?
You can just run this in interactive mode:

```python
from sklearn.pipeline import _name_estimators

estimators = ['a', 'a', 'b']
_name_estimators(estimators)
# >>> [('a-1', 'a'), ('a-2', 'a'), ('b', 'b')]
```

So basically it returns tuples with unique keys. Every tuple contains a unique name (if an estimator is duplicated, its occurrence number is appended) together with the raw estimator value.
```python
from sklearn.pipeline import _name_estimators

class MajorityVoteClassifier(BaseEstimator, ClassifierMixin):
    def __init__(self, classifiers, vote='classlabel', weights=None):
        self.classifiers = classifiers
        self.named_classifiers = {key: value for key, value in _name_estimators(classifiers)}
        self.vote = vote
        self.weights = weights

clf1 = LogisticRegression(penalty='l2', C=0.001, random_state=1)
clf2 = DecisionTreeClassifier(max_depth=1, criterion='entropy', random_state=0)
clf3 = KNeighborsClassifier(n_neighbors=1, p=2, metric='minkowski')

pipe1 = Pipeline([['sc', StandardScaler()], ['clf', clf1]])
pipe3 = Pipeline([['sc', StandardScaler()], ['clf', clf3]])

mv_clf = MajorityVoteClassifier(classifiers=[pipe1, clf2, pipe3])
```

I am unable to understand how _name_estimators works, so can someone please explain what _name_estimators is doing in this code?
What does _name_estimators do in the following code?
Where to store installer binaries in an Azure DevOps pipeline

Although the installer is a tool that is not referenced in a project of your product, you could still use NuGet to manage this tool.

As we know, if binaries are referenced by the project, we always put the DLL files in the lib folder of the NuGet package, where they are added as references for compile time as well as runtime. For binaries that are not referenced by the project, we can store those installer binaries in the tools folder:

```xml
<file src="..\..\InstallerBinariesA1.dll" target="tools" />
```

You can check the document "From a convention-based working directory" for some more details.

Then, we can store this tool package in Azure DevOps Artifacts, and you can access the binaries of this tool in the \packages folder after NuGet restores it, in order to generate the package.

Hope this helps.
I was wondering what other people use to store tools required in the pipeline of a product which are not part of the actual product.

Let me sketch our situation: we are using an Azure DevOps pipeline to build and package our product. During the package process, we generate a full install and update package that can be shipped to customers. We have developed a separate tool that we use to install our products (so this tool is not specific to the product of the pipeline, but is used for more of our products). We need to access the binaries of this tool (let's call this tool "installer") to generate the package.

Now my question is, where do I store the "installer" tool so my pipelines can access it? The installer is a tool that is not referenced in a project of our product, so a NuGet package doesn't seem right. Simply downloading the artifacts of its pipeline also doesn't seem viable, since the runs of a pipeline are removed after a certain amount of time.
Where to store installer binaries in azure devops pipeline
There are two reasons why no agent might be assigned:

- The job specifies resources that no agent has
- The pipeline is not in the same environment (or not in any environment) that an agent is in

So please double-check the pipeline's and agent's environments and resources. I hope this helps!
I have installed GoCD (Go continuous delivery) on my Ubuntu VPS with its agent (Go Agent) on the same server.

One pipeline (the first) has been created and it runs smoothly through every stage/job/task; there's one agent that always does the job. But when I try to create a new pipeline, I don't know why no agent will work on the second pipeline. I just see the message "Not yet assigned" (see screenshot).

I tried waiting up to one hour, but the agent never starts working on the second pipeline's job. Any ideas? Thanks in advance.
gocd - Agent never doing second pipeline stage/job
Use propagate: true instead of false. This is from the documentation:

propagate (optional): If set, then if the downstream build is anything but successful (blue ball), this step fails. If disabled, then this step succeeds even if the downstream build is unstable, failed, etc.; use the result property of the return value as needed.
Two Jenkins jobs are connected as follows: the upstream job is a pipeline and triggers a freestyle job. Is it possible in Jenkins to get the following scenario: if the downstream job is aborted, the upstream job gets aborted, but if the upstream job is aborted, the downstream job should not abort and should continue running.

Upstream job:

```groovy
node('upstream_node') {
    build job: 'downstream_job', wait: true, propagate: false
}
```

I have tried all possible combinations of the 'wait' and 'propagate' options, but none of them work.
downstream jenkins job does not get aborted if upstream job is aborted
Found out what was wrong. In the Alter Row step of the data flow, I had an update condition first and an insert condition second. When I removed "Update" as the first condition, all data was successfully inserted into my sink table. I originally thought that the Alter Row step combined the conditions with an OR, but it seems it evaluates them in order from first to last. Since my first one was an update of data that was not present in my sink table, it did not seem to fall through to the insert condition at all.
I am very new to both SQL and Azure Data Factory and am trying to import some data from one table to another in the same Azure SQL database using Azure Data Factory. To be able to use the data in my sink table, I need to transform some of the rows in the source. My flow looks like this (see the "Data flow in Azure Data Factory" screenshot).

The data flow executes successfully (see the "Data flow results" screenshot). However, data rows are not being copied to my sink table. I've even tried the "Recreate table" option on the sink; I can see that the column names in the sink table are being overwritten to match the source table, but still no rows are being transferred to the sink; it stays empty.

Any suggestions on what I might be doing wrong? Thanks in advance!
Why is data not being transferred to my sink table after successfully completed Data Flow in Azure Data Factory?
After a couple of trials, I figured out what was wrong. Just use:

```python
xgb.best_estimator_.named_steps['xgb'].predict(test_final)
```
I developed a pipeline using XGBoost which returned a best estimator. However, when trying to use this best estimator to predict my test set, the following error is raised: "ValueError: Specifying the columns using strings is only supported for pandas DataFrames".

Below is my code for the pipeline that I used. Note: ct is just a ColumnTransformer using SimpleImputer and OneHotEncoder for categorical columns, and SimpleImputer and StandardScaler for numerical columns.

```python
ml_step_1 = ('transform', ct)
ml_step_2 = ('pca', PCA())
xgb = ('xgb', XGBRegressor())

xgb_pipe = Pipeline([ml_step_1, ml_step_2, xgb])

xgb = RandomizedSearchCV(xgb_pipe, xgb_param_grid, cv=kf, scoring='neg_mean_absolute_error')
xgb.fit(train_full_features, train_full_target)
```

Running this pipeline, here is the best estimator that I got:

```
Best XGBoost parameters: {'xgb__silent': True, 'xgb__n_estimators': 1000, 'xgb__max_depth': 4, 'xgb__learning_rate': 0.09999999999999999, 'transform__num__imputer__strategy': 'median', 'transform__cat__imputer__strategy': 'most_frequent', 'pca__n_components': 68}
```

Now, I called this best estimator and did the following:

```python
test_full_imp = pd.DataFrame(xgb.best_estimator_.named_steps['transform'].transform(test_full))
test_final = xgb.best_estimator_.named_steps['pca'].transform(test_full_imp)
predictions = xgb.best_estimator_.predict(test_final)
```
How to use best estimator from pipeline to predict test set?
As there was time pressure and no answer given, I was forced to use the workaround, so based on this answer I prepared the following request(s):

```
curl http://teamcity:8111/app/rest/buildQueue/project:<projectId>,count:[1-n] \
  -X POST -u *** \
  -H 'Content-Type: application/json' -d '{
    "buildCancelRequest": {
      "comment": "Multiple builds will be cancelled.",
      "readdIntoQueue": "false"
    }
  }'
```

You only need to replace n with the number of builds you have in the selected project to cancel them all. It basically sends multiple requests, which results in stopping all the queued builds.

However, if you want to stop already-running builds, you need to hit a different endpoint:

```
curl http://teamcity:8111/app/rest/builds/project:<projectId>,running:true,count:[1-n] \
  -X POST -u *** \
  -H 'Content-Type: application/json' -d '{
    "buildCancelRequest": {
      "comment": "Already running builds will be stopped.",
      "readdIntoQueue": "false"
    }
  }'
```

If you know that there will be only one build running per project, then you can skip the count:[1-n] locator; only one request will be sent, which will stop the currently running build within the selected project.
Hello DevOps evangelists! First of all, thanks to this answer, I was able to successfully cancel a single TeamCity build using the following curl:

```
curl http://teamcity:8111/app/rest/buildQueue/buildType:<buildId> \
  -X POST -u *** \
  -H 'Content-Type: application/json' -d '{
    "buildCancelRequest": {
      "comment": "This build was cancelled.",
      "readdIntoQueue": "false"
    }
  }'
```

However, my idea was to cancel multiple builds within a particular project via the TeamCity REST API. I tried the following:

```
curl http://teamcity:8111/app/rest/buildQueue/project:<projectId>,count:<n> \
  -X POST -u *** \
  -H 'Content-Type: application/json' -d '{
    "buildCancelRequest": {
      "comment": "Only one build was cancelled.",
      "readdIntoQueue": "false"
    }
  }'
```

Unfortunately, I failed miserably, because only a single build from this project was cancelled. I know I can send this request as many times as there are builds in the project, but that is an ugly workaround! I want to do it right! Could someone please tell me how to cancel all the builds within a project by using the TeamCity REST API?
TeamCity - How to cancel multiple builds via REST API
You are almost there. From the task params documentation:

params: {string: string}
Optional. A key-value mapping of values that are exposed to the task via environment variables. Use this to provide things like credentials to a task.

So your shell script invocation becomes:

```
ci/scripts/build.sh -u ${u} -p ${p}
```
I am pretty new and learning the ropes of setting up and running a Concourse CI/CD pipeline. One of my build tasks requires credentials stored in the Concourse credential manager. They need to be passed as parameters to my shell script. How do I pass them as arguments?

My shell script runs as:

```
ci/scripts/build.sh -u username -p password
```

username and password are to be picked up from the credential manager. My Concourse pipeline task is set up as:

```yaml
- task: build
  config:
    platform: linux
    image_resource:
      source:
        repository: java
        tag: "8"
      type: docker-image
    inputs:
      - name: resource-repo
    outputs:
      - name: artifacts
    run:
      path: ci/scripts/build.sh
  params:
    u: ((artifactory_user))
    p: ((artifactory_password))
```

This doesn't seem to work. I guess there must be a better way to do it.
Concourse CI/CD - passing concourse credentials as shell script parameters
Your variable is trainX_M and not trainX_M0, so change it to:

```python
trainX_M0[j] = mldata_pd[(mldata_pd.sample_id == i) & (mldata_pd.training_set_band == j)].drop(
    ['conv_gv_band', 'sample_id', 'training_set_band'], axis=1)
```

or create a list trainX_M and append all of the per-class dataframes to it.
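A small sketch of that second suggestion, reusing the variable names from the question (the dataframe mldata_pd and the loop lists sample_train and model_class are assumed to exist): initialise a container before the loops, then store one frame per class key.

```python
# Containers must exist before the loop assigns to trainX_M[j] / trainy_M[j]
trainX_M = {}
trainy_M = {}

for i in sample_train:
    for j in model_class:
        mask = (mldata_pd.sample_id == i) & (mldata_pd.training_set_band == j)
        trainX_M[j] = mldata_pd[mask].drop(
            ['conv_gv_band', 'sample_id', 'training_set_band'], axis=1)
        trainy_M[j] = mldata_pd.loc[mask, ['conv_gv_band']]
```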
I am trying to automate a training pipeline and am having some trouble renaming the input dataframes across different model classes.

```python
sample_train = [9]
sample_test = [18]
model_class = [0]

for i in sample_train:
    for j in model_class:
        # Define the training datasets - filter the datasets with model selections
        trainX_M[j] = mldata_pd[(mldata_pd.sample_id == i) & (mldata_pd.training_set_band == j)].drop(
            ['conv_gv_band', 'sample_id', 'training_set_band'], axis=1)
        trainy_M[j] = mldata_pd[(mldata_pd.sample_id == i) & (
            mldata_pd.training_set_band == j)].iloc[:, mldata_pd.columns == 'conv_gv_band']

trainX_M0, testX, trainy_M0, testy = train_test_split(trainX_M0, trainy_M0, test_size=0.2, random_state=42)
```

I expect to have trainX_M0 when model_class = 0, but instead I receive the error:

```
NameError: name 'trainX_M' is not defined
```
looping a training model over model classes and changing dataframe name by model class
That's a positively ancient version of Perl. The first one I used was 5.004 in the mid-90s. Granted, things may have changed, but I think that the following quotation from the system() documentation is still relevant:

"If there is only one scalar argument, the argument is checked for shell metacharacters, and if there are any, the entire argument is passed to the system's command shell for parsing (this is "/bin/sh -c" on Unix platforms, but varies on other platforms). If there are no shell metacharacters in the argument, it is split into words and passed directly to "execvp", which is more efficient."

It seems likely that you are experiencing the difference between these two cases, and that the (external) shell program is imposing the length limit when you add the pipe.

You could work around this by temporarily re-opening STDOUT as "| tee -a $log_file", then just executing system($cmd). I daren't try to tell you how exactly to do that (particularly if you want to get your old STDOUT back afterwards) under such an ancient version of Perl, because that most certainly has changed over time. You'll need to consult the documentation at hand.

One possibility is to fork(), then in the child process open STDOUT as "| tee -a $log_file" and exec($cmd), and in the parent process wait() for all that to finish, bearing in mind that there is more than one child process because you opened a pipe.
I have Perl code with a very long command, and I want to print the output to a log file. It looks like this:

```perl
system("$cmd | tee -a $log_file");
```

When I run the Perl script, the system call throws "The command line is too long". If I run the system call without the pipeline, it works. So my questions are:

- Is there a limit of characters for the pipeline?
- How can I fix the problem?

Some additional information: the command $cmd has a length of 8532. I am using Perl version 5.003_07 (yes, I know it's old; my company is the owner).
How do I fix the error 'The command line is too long' if I want to have the output from a command?
You can use already existing CI variables to do something like this to retrieve the list of changed files:

```
git diff --name-only $CI_COMMIT_BEFORE_SHA $CI_COMMIT_SHA
```

Use CI_BUILD_BEFORE_SHA and CI_BUILD_REF if you are running on GitLab 8.x.
During the GitLab pipeline (triggered after each commit on my branch), I want to know which files are affected by the commit, in order to apply a specific bash script to each file. I'm currently using the following code in my gitlabci.yaml file:

```yaml
- export DIFF=$(git show --stat HEAD)
- ./myBashScript.sh
```

Then I'm using $DIFF in my bash script. But is there a better approach? (I'm using a local GitLab 10.8.)
How can I know the updated files during the GitLab CI pipeline?
The order of the operations in the pipeline, as determined in steps, matters; from the docs:

steps : list
List of (name, transform) tuples (implementing fit/transform) that are chained, in the order in which they are chained, with the last object an estimator.

The error is due to adding SelectKBest as the last element of your pipeline:

```python
step = [('scaler', s), ('kn', clf), ('sel', sel)]
```

which is not an estimator (it is a transformer), as well as to your intermediate step kn not being a transformer. I guess you don't really want to perform feature selection after you have fitted the model... Change it to:

```python
step = [('scaler', s), ('sel', sel), ('kn', clf)]
```

and you should be fine.
I am trying to solve a problem where I use the KNN algorithm for classification. While using a pipeline, I decided to add SelectKBest, but I get the error below:

```
All intermediate steps should be transformers and implement fit and transform.
```

I don't know if I can use this selection algorithm with KNN. I tried with SVM as well and got the same result. Here is my code:

```python
sel = SelectKBest('chi2', k=3)
clf = kn()
s = ss()

step = [('scaler', s), ('kn', clf), ('sel', sel)]
pipeline = Pipeline(step)

parameter = {'kn__n_neighbors': range(1, 40, 1),
             'kn__weights': ['uniform', 'distance'],
             'kn__p': [1, 2]}

kfold = StratifiedKFold(n_splits=5, random_state=0)
grid = GridSearchCV(pipeline, param_grid=parameter, cv=kfold, scoring='accuracy', n_jobs=-1)
grid.fit(x_train, y_train)
```
Problem with SelectKBest method in pipeline
slide() {
    local -a content
    local line prefixed_line cut_line
    readarray -t content || return                          # read our stdin into an array
    for ((prefix=0; prefix<=COLUMNS; prefix++)); do          # loop increasing # of spaces
        for line in "${content[@]}"; do                      # for lines in our input array...
            printf -v prefixed_line "%${prefix}s%s" '' "$line"   # first add spaces in front
            cut_line=${prefixed_line:0:$COLUMNS}             # then trim to fit on one line
            printf '%s\n' "$cut_line"                        # finally, print our trimmed line
        done
        tput cuu "${#content[@]}"                            # move the cursor back up
    done
}
Used as:
toilet -t -f ivrit 'rob93c' | lolcat | slide
...or, to allow someone without all those tools installed to test:
printf '%s\n' 'one' ' two' ' three' | slide
I used toilet to write a simple pipeline to print out my username every time I open the console, and I wanted to be able to get it to slide consistently, since the printed word is a few rows tall. toilet -t -f ivrit 'rob93c' | lolcat (see the script output screenshot). I tried to use this script to make it shift, but I'm clearly missing something since it doesn't move: while true; do echo ' ' && toilet -t -f ivrit 'rob93c' | lolcat sleep 1 done
How can I make a shell pipeline print and slide a word from left to right in the console?
To run multiple commands in Docker, use /bin/bash -c and semicolons ; or, alternatively, you can also pipe commands inside the Docker container. You can try something like this: docker run --rm $CONTAINER_IMAGE:test /bin/bash -c "RAILS_ENV=test && rails test RAILS_ENV=test"
After I want to test my rails code automatic after pushing it to master my test's not running.my stage to test my code looks like this:test_code: stage: test_code script: - docker pull $CONTAINER_IMAGE:test || true - docker build -f Dockerfile.test --cache-from $CONTAINER_IMAGE:test --tag $CONTAINER_IMAGE:test . - docker run --rm $CONTAINER_IMAGE:test rails db:migrate RAILS_ENV=test && rails test RAILS_ENV=test - docker push $CONTAINER_IMAGE:testthis is the output from the pipeline:$ docker run --rm $CONTAINER_IMAGE:test rails db:migrate RAILS_ENV=test && bundle exec rails test == 20181005152311 CreateUsers: migrating ====================================== -- create_table(:users) -> 0.0014s == 20181005152311 CreateUsers: migrated (0.0014s) ============================= /bin/sh: eval: line 86: rails: not found ERROR: Job failed: exit code 127I don't understand why the command after the "&&" isn't recognized.
rails tests not working in gitlab pipeline
It seems that the output from printing the pipeline directly is truncated, and doesn't show the whole output. For example, the arguments shuffle, tol, validation_fraction, verbose, and warm_start belong to the SGDClassifier. As you have found yourself in the comments, to avoid truncation, you can print the steps directly using pipeline.steps.
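For instance, a short sketch of listing the stages one by one instead of relying on the pipeline's repr (pipe being the fitted Pipeline from the question):
# Walk the (name, estimator) pairs; nothing gets truncated this way
for name, step in pipe.steps:
    print(name, '->', type(step).__name__)
    print(step.get_params())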
I am trying to create an SVM classifier for short texts with TfIdf as the first step. When I create the Pipeline, fit it and get accuracy scores, everything looks right.
vectorizer = TfidfVectorizer(analyzer='word', ngram_range=(1,4), max_features=50000, max_df=0.5, use_idf=True, norm='l2')
classifier = SGDClassifier(loss='hinge', max_iter=50, alpha=1e-05, penalty='l2')
pipe = Pipeline(steps=[('tfidf', vectorizer), ('clf', classifier)])
pipe.fit(X_train, y_train)
But when I load the created model and print it, I get only one step (TfIdf) instead of two (TfIdf and SVM).
print(pipe) Pipeline(memory=None, steps=[('tfidf', TfidfVectorizer(analyzer='word', binary=False, decode_error='strict', dtype=<class 'numpy.float64'>, encoding='utf-8', input='content', lowercase=True, max_df=0.5, max_features=50000, min_df=1, ngram_range=(1, 4), norm='l2', preprocessor=None, smooth_idf=True...m_state=None, shuffle=True, tol=None, validation_fraction=0.1, verbose=0, warm_start=False))])
I assume that I don't understand how the Pipeline works exactly, but in every example that I saw there were as many steps as were loaded into the Pipeline at first. Thank you for the help!
Python Pipeline shows only one step
The code needs to add (F(n-1) + F(n-2)) before multiplying F(n) · (F(n-1) + F(n-2)). Since F(n-2) doesn't need to be saved, you could add the register with F(n-1) to the register with F(n-2), so that the sum ends up in the register that was holding F(n-2). Trivia: F(0) = 0, since F(n-2) = (F(n+1) - (F(n) · F(n-1)))/F(n). You can also calculate F(-1) = 1, but not F(-2), since that ends up as 1/0.
This is a Fibonacci sequence that I recently attempted to turn into a assembly code through the use of instruction set. I am not sure how to go about testing it and was wondering could confirm if I got this right and if not where I went wrong. Also the "." that is used does this mean I must multiply using the instruction set. Below is the question I got and my answer I have come up with. I would also like to know if I have used the correct #.
Fibonacci sequence question with instruction sets used to make assembly code
Found the issue. I passed all environment variables into the container; on Linux I passed variables such as PATH and destroyed the auto-finding of the correct bin folder. If I call it as /usr/local/bin/aws it works on both systems. After passing only relevant environment variables, aws works out of the box.
I have docker container, I run it and after some time it has to execute this line$(aws ecr get-login --region $AWS_DEFAULT_REGION | sed -e 's/-e none//g')Now the weird thing is - when I run it on my local machine (Windows) it passes and writes theLogin SucceededWhen I run it on the Linux-Ami agent, everything works correctly, but when it gets to this line it outputs/app/ops/release/docker-run.sh: 51: /app/ops/release/docker-run.sh: aws: not foundI am confused as I am using docker to actually have the same environment no matter when I execute it. The only non-docker part is when I build the image and run it (and in that part I understand if there are some differences), but then everything else runs in container based on same Dockerfile in both environments.The only real difference can be the environment variables that are passed into container on start.Any idea?Part of the Dockerfile for building image for this container isRUN pip install --upgrade awscli
Docker container - different behaviour on Win and Linux
I found the solution already:
- task: qetza.replacetokens.replacetokens-task.replacetokens@3
  displayName: 'Replace buildNumber in appsetting'
  inputs:
    targetFiles: |
      src/Project.WebApp/appsettings.json
      src/Project.WebApi/appsettings.json
    actionOnMissing: fail
I'm trying to replace tokens in both appsettings.json files, but I can't find the syntax to add multiple targetFiles. I've tried this but it doesn't replace: - task: qetza.replacetokens.replacetokens-task.replacetokens@3 displayName: 'Replace buildNumber in appsetting' inputs: targetFiles: -src/Project.WebApp/appsettings.json -src/Project.WebApi/appsettings.json actionOnMissing: fail
I'd like to replacetokens 2 files on yml step
There is no contains. If you need it to run on any agent, then delete the demand when queuing the build. Or the opposite: add the demand when queuing the build if the default behavior should be running on any agent.
I am setting up a build pipeline for one of my company's projects, where we need to be able to specify in the variables which build agent it should be run on. The problem is that we need the build to run on any available agent if no agent is specified, but vso only seems to have the-equalsand not-containswhich I believe we will need to accomplish this.I've tried looking through the documentation, but have not been able to find any documentation except for the list of functions foundhere- but which only seems to work for setting up conditions.This is what I have tried:pool: name: pool demands: - Agent.Name -equals $(RunOn)The expected result is that it runs on any available agent if no agent is specified, and runs on a specific agent if it is.
How can i use a Contains() in a YAML vso-pipeline demand?
This has nothing to do with Git and everything to do with your lftp command. When you're uploading data with lftp, you're using the -e (--delete) option to mirror, which the man page specifies as "delete files not present at the source". Since your upload files aren't present on the source, they're deleted. If you want to continue to use this option, you need to move your uploads folder out of the same root and into a separate directory; otherwise, you should stop using this option.
I am usingGitandGitlab. UsingGitlab CI pipelines(.gitlab-ci.ymlfile) I am trying to deploy my project to a shared host.It works fine, but when pipeline runs, it removes user attachments, profile pictures, documents and all files that users upload directly to host and those files don't exist in git master branch.What should I do to make Gitlab ignore myuploadsfolder and all sub-folders in pipelines?Here we have.gitignorefile:.idea files/And this one is my.gitlab-ci.ymlfile:deploy: script: - apt-get update -qq && apt-get install -y -qq lftp - lftp -c "set ftp:ssl-allow no; open -u "USER","PASSWORD" "HOST"; mirror -Rnev ./ ./public_html --ignore-time --parallel=10 --exclude-glob .git* --exclude .git/" only: - master
Gitlab CI removes user attachments and uploaded files
Bash has set -o pipefail, which uses the first non-zero exit code (if any) as the exit code of a pipeline. POSIX shell doesn't have such a feature AFAIK. You could work around that with a different approach:
tail -F -n0 real_work.log &
do_real_work > real_work.log 2>&1
kill $!
That is, start following the as yet non-existing file before running the command, and kill the process after running the command.
In my script I need to work with the exit status of the non-last command of a pipeline: do_real_work 2>&1 | tee real_work.log To my surprise, $? contains the exit code of the tee. Indeed, the following command: false 2>&1 | tee /dev/null ; echo $? outputs 0. Surprise, because the csh's (almost) equivalent false |& tee /dev/null ; echo $status prints 1. How do I get the exit code of the non-last command of the most recent pipeline?
How to detect an error at the beginning of a pipeline?
Such information flow is not supported. To create a Transformer that can be used with both the Python and Scala code base you have to: Implement a Java or Scala Transformer, in your case extending org.apache.spark.ml.feature.SQLTransformer. Add a Python wrapper extending pyspark.ml.wrapper.JavaTransformer, the same way as pyspark.ml.feature.SQLTransformer, and interface the JVM counterpart from it.
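A rough sketch of what the Python side of such a wrapper could look like, assuming the JVM side (a hypothetical Scala class com.example.ml.FilterTransformer extending SQLTransformer) is already compiled and on the driver's classpath:
from pyspark.ml.wrapper import JavaTransformer

class FilterTransformer(JavaTransformer):
    """Thin Python handle; the actual transform logic lives in the Scala class."""
    def __init__(self):
        super(FilterTransformer, self).__init__()
        # _new_java_obj instantiates the JVM-side transformer
        self._java_obj = self._new_java_obj("com.example.ml.FilterTransformer", self.uid)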
I wrote a custom SQLTransformer in PySpark. Setting a default SQL statement is mandatory to have the code executed. I can save the custom transformer within Python, load it and execute it using Scala and/or Python, but only the default statement is executed, despite the fact that there is something else in the _transform method. I have the same result for both languages, so the problem is not related to the _to_java method or the JavaTransformer class.
class filter(SQLTransformer):
    def __init__(self):
        super(filter, self).__init__()
        self._setDefault(statement = "select text, label from __THIS__")

    def _transform(self, df):
        df = df.filter(df.id > 23)
        return df
Why PySpark execute only the default statement in my custom `SQLTransformer`
Currently, you cannot modify the Solr response. All you can do is add to it. So you could add a new block of JSON, include the "id" of the item and then list the fields and values you want to use in your UI. Otherwise, you need to make the change in your Index Pipeline (as long as the value doesn't need to change based on the query).
How can I transform the Solr response using a JavaScript query pipeline in Lucidworks Fusion 4.1? For example, I have the following response: [ { "doc_type":"type1", "publicationDate":"2018/10/10", "sortDate":"2017/9/9"}, { "doc_type":"type2", "publicationDate":"2018/5/5", "sortDate":"2017/12/12"}] And I need to change it with the following conditions: If doc_type = type1 then put sortDate in publicationDate and remove sortDate; else only remove sortDate. How can I manipulate the response? There is no documentation on the official website.
Lucidworks Fusion 4.1 transform result documents using Javascript query pipeline
You have to configure Bitbucket artifacts. You can follow this guide (written for GitHub, but it is mainly the same) to set up the artifact type: https://www.spinnaker.io/setup/artifacts/github/
So, notification seems to be working on push event from bit bucket as it returning 200 status. Now on spinnaker (1.9.5) deck side I am configuring trigger and expected artifact in this way:hal config artifact bitbucket account add spinnaker-bitbucket-cloud hal config artifact bitbucket account delete spinnaker-bitbucket-cloud hal config artifact bitbucket account add spinnaker-bitbucket-cloud --username mybitbucketuser --passwordExpected ArtifactAutomated TrigerDeploy Manifest
Not being able to configure Spinnaker bitbucket pipeline
Create a Docker image that holds all three binaries and a wrapper script to run all three. Then deploy a Kubernetes CronJob that runs all three sequentially (using the wrapper script as entrypoint/command), with the appropriate schedule. For debugging you can then just run the same image manually:
kubectl -n XXX run debug -it --rm --image=<image> -- /bin/sh
$ ./b
...
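As an illustration, a hypothetical wrapper entrypoint in Python that the CronJob image could use; running the container with an argument (e.g. b) executes just that step, which keeps manual single-step debugging possible:
#!/usr/bin/env python3
import subprocess
import sys

STEPS = ["./a", "./b", "./c"]  # the three pipeline binaries baked into the image

def main(selected=None):
    steps = ["./" + selected] if selected else STEPS
    for step in steps:
        print("running", step, flush=True)
        subprocess.run([step], check=True)  # abort the whole run if a step fails

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else None)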
I have a data pipeline in Go with steps A, B and C. Currently those are three binaries. They share the same database but write to different tables. When developing locally, I have been just running./a && ./b && ./c. I'm looking to deploy this pipeline to our Kubernetes cluster.I want A -> B -> C to run once a day, but sometimes (for debugging etc.) I may just want to manually run A or B or C in isolation.Is there a simple way of achieving this in Kubernetes?I haven't found many resources on this, so maybe that demonstrates an issue with my application's design?
How do I run a multi-step cron job, but still make it able to execute a single step manually?
IoT Analytics doesn't support hyphens '-' in attribute names. If you want to use a separator, try using an underscore instead '_' and that should resolve your problem.
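A sketch of how the pipeline's Lambda could sanitize the attribute names before handing the batch back to IoT Analytics (assuming the event passed to the handler is the list of messages, as in the question):
def sanitize(message):
    # replace the unsupported '-' separator with '_' in every attribute name
    return {key.replace("-", "_"): value for key, value in message.items()}

def lambda_handler(event, context):
    return [sanitize(message) for message in event]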
I got this error from AWS IoT Analytics service after message is Transform in lambda: my lambda get as input a json format string{ "id": "223", "data": "valid-timestamp,1,2,3,4,5" }The data key holds my IoT data values on a specific timespanThe lambda parse the above input and return array of dict:[ { "id": "1", "timestamp": "valid-timestamp1", "value-1": "1", "value-2": "2", "value-3": "3" }, { "id": "1", "timestamp": "valid-timestamp1", "value-1": "1", "value-2": "2", "value-3": "3" } ]I did not succeeded to create a my_data_store I would be happy if someone can assist. Thanks
Not storing this message in Datastore, missingAttributeNames
Scenario 1 - 24/7 pipeline
The processes in the pipeline must always be running. A scheduler is not the right choice here, as processes are not being scheduled; they should be monitored and restarted if they die. The Flume agents and the Spark Streaming driver running as a client should be executed through systemd. Systemd will take care of restarting the Flume agent or the Spark Streaming driver if it dies. If the Spark Streaming driver is running in cluster mode, run it with the supervise flag on and you will not need a systemd unit for it.
Scenario 2 - 8 AM to 8 PM
If you have systemd units for both the Flume agent and the Spark Streaming driver in client mode, two scripts could be written, one for starting these processes and the other for stopping them. You can schedule the start script at 8 AM using either Oozie or crontab, and schedule the stop script at 8 PM.
I have a batch processing data pipeline on a Cloudera Hadoop platform - files being processed via Flume and Spark into Hive. The orchestration is done via Oozie workflows.I'm now building a near-real-time data pipeline using Flume, Kafka, Spark Streaming and finally into HBase. There are 2 scenarios in terms of orchestration :Keep the pipeline on 24/7 - What should be the orchestration (scheduling) mechanism? Oozie?Operate the pipeline between 8 am and 8 pm - What should be the orchestration (scheduling) mechanism? Oozie?Please describe your experiences from real-life production implementations.
How to schedule a real time data pipeline (flume, kafka, spark streaming)?
If you want to send only one message out of GetNext, you have to call the base Disassemble from your Disassemble method and get all the messages (you can enqueue these messages to manage them in GetNext), as in:
public new void Disassemble(IPipelineContext pContext, IBaseMessage pInMsg)
{
    try
    {
        base.Disassemble(pContext, pInMsg);
        IBaseMessage message = base.GetNext(pContext);
        while (message != null)
        {
            // Only store one message
            if (this.messagesCount == 0)
            {
                // _messages is a Queue<IBaseMessage>
                this._messages.Enqueue(message);
                this.messagesCount++;
            }
            message = base.GetNext(pContext);
        }
    }
    catch (Exception ex)
    {
        // Manage errors
    }
}
Then in the GetNext method you have the queue and you can return whatever you want:
public new IBaseMessage GetNext(IPipelineContext pContext)
{
    return _messages.Dequeue();
}
I have a requirement where I will be receiving a batch of records. I have to disassemble and insert the data into DB which I have completed. But I don't want any message to come out of the pipeline except the last custom made message.I have extendedFFDasmand calledDisassembler(), then we haveGetNext()which is returning every debatched message out and they are failing as there is subscribers. I want to send nothing out fromGetNext()until Last message.Please help if anyone have already implemented this requirement. Thanks!
In Disassembler pipeline component - Send only last message out from GetNext() method
splitTests is a Groovy method that comes from the Parallel Test Executor Plugin for Jenkins: https://wiki.jenkins.io/display/JENKINS/Parallel+Test+Executor+Plugin In Groovy, you don't have to use parentheses for method calls, so you can write the same line as this:
def splits = splitTests(
    parallelism: [$class: 'CountDrivenParallelism', size: 3],
    generateInclusions: true)
where the parameter for the method is a Map with the 3 keys: parallelism, size and generateInclusions. $class 'CountDrivenParallelism' tells the plugin which implementation for parallelizing the tests should be used. def Groups = [:] defines a new local variable named Groups and initializes it with a new HashMap; [:] is short for an empty Map in Groovy. See e.g. this article, which describes the code you have posted: https://jenkins.io/blog/2016/06/16/parallel-test-executor-plugin/ and what it does.
I am using Declarative Pipeline in Jenkins and i have more than 200 tests. I want to split them to many machines. i have a piece of code that i have to repair but i dont konw how. The documentation is not so good. Can someone explain me what is happening in these lines of code ?def splits = splitTests parallelism: [$class: 'CountDrivenParallelism', size: 3], generateInclusions: true def Groups = [:] for (int i = 0; i < splits.size(); i++) { def split = splits[i] Groups["split-${i}"]splitTests is a language fonction, but parallelism ?$Class 'CountDrivenParallelism', here he created a class ?What is Groups or this operator [:]
Jenkins Declarative Pipeline
If you are running builds and releases with hosted pipelines, which means the builds run on machines managed by Microsoft, then you are actually using Microsoft-hosted CI/CD. This uses the pool of Microsoft-hosted agents to run your builds, which do have some limitations, such as: the ability to log on; the ability to drop artifacts to a UNC file share; the ability to run XAML builds; and potential performance advantages that you might get by using self-hosted agents, which might start and process builds faster (learn more). It's not possible to select a non-root user for now; take a look at this similar thread: Improve linux based agents to run under a vsts:vsts id instead of root
I have some unit tests that test file accessibility, these are failing on Linux because it's running as the root user and the root user can write to read only files. Is it possible (preferably through yaml) to specify that some or all of the pipeline run as a non-root user?
Is it possible to run all or part of a Hosted Linux VSTS Pipeline as a non-root user?
It depends on the amount of rows you are trying to copy. If you just need a few tables/rows, you may try Azure Automation. That way you can just create a runbook with PowerShell that connects to the Azure SQL Server, queries the server and then sends that data to the Azure MySQL Server. Then you can call the runbook from Data Factory using a webhook :) If you end up going this route, remember that runbooks have a limitation and cannot run for more than 3 hours. More information here: https://learn.microsoft.com/en-us/azure/automation/automation-runbook-execution#fair-share Another option would be to create a custom activity for Data Factory. For this you need an Azure Batch pool. More here: https://learn.microsoft.com/en-us/azure/data-factory/transform-data-using-dotnet-custom-activity Hope this helped!
I'm aware that Azure Pipeline's Copy Data Activity does not support MySQL as sink. But is there any workaround via some other component to do so?
Azure pipeline copy data activity for copying data from Azure MSSQL to Azure MySQL
You are adding the script to the same pipeline you added the Get-MyParameter command to. In effect, you are doing get-myparameter | { … your script … } Try using separate pipelines instead:
var result1 = powerShellInstance.AddCommand("Get-MyParameter").Invoke();
var result2 = powerShellInstance.AddScript(psScript).Invoke();
Also, you can simplify your module loading code to powerShellInstance.AddCommand("Import-Module").AddParameter("Name", @"..\CommandLets\bin\Debug\Commandlets.dll").Invoke();
I need to execute Powershell script containing my custom Commandlets ( exist in different assembly) from C#. I tried following approach but it just invokes the commandlet only once and that's it, while in the script that commandlet is written more than once.Get-MyParameter myTasks Start-Process notepad.exe Get-MyParameter myTasks Get-MyParameter myTasks Get-MyParameter myTasksWhile, MyParameter is written in different assembly. Tried Code is :var powerShellInstance = PowerShell.Create(); powerShellInstance.Runspace = runSpace; var command = new Command("Import-Module"); command.Parameters.Add("Assembly", Assembly.LoadFrom(@"..\CommandLets\bin\Debug\Commandlets.dll")); powerShellInstance.Commands.AddCommand(command); powerShellInstance.Invoke(); powerShellInstance.Commands.Clear(); powerShellInstance.Commands.AddCommand(new Command("Get-MyParameter")); powerShellInstance.AddScript(psScript); var result = powerShellInstance.Invoke();What am I doing wrong here?
Executing Powershell script containing Commandlets
I finally solved it by pressing the "Clear Runner Caches" button and running it again.
I'm getting this error after pipeline runs:Preparing to unpack .../git-ftp_1.3.1-1_all.deb ... Unpacking git-ftp (1.3.1-1) ... Setting up libcurl3:amd64 (7.52.1-5+deb9u6) ... Processing triggers for libc-bin (2.24-11+deb9u3) ... Setting up curl (7.52.1-5+deb9u6) ... Setting up git-ftp (1.3.1-1) ... $ git ftp push --user $FTP_USERNAME --passwd $FTP_PASSWORD sftp://$FTP_HOST fatal: Remote host not set. ERROR: Job failed: exit code 1This is my .yml config:image: samueldebruyn/debian-git stage_deploy: only: - develop script: - apt-get update - apt-get -qq install git-ftp - git ftp push --user $FTP_USERNAME --passwd $FTP_PASSWORD sftp://$FTP_HOSTA month ago it worked fine. The values of the variables are correct..Any ideas?
git-ftp fatal: Remote host not set
So you just want to loop over both collections; something like the following should do the trick:
$promos = Get-Content \\path\Promo_Collection.csv
$descriptions = Get-Content \\path\Description_Collection.csv
while ($promos) {
    $p, $promos = $promos
    $d, $descriptions = $descriptions
    "Promo: $p Description: $d" | Out-File -Width 4096 \\path\Test_$p.txt -encoding ASCII
}
I'm looking to source variables from two different lists, and match them to a text out-file. They will always be associated, line for line.$promos = Get-Content \\path\Promo_Collection.csv $descriptions = Get-Content \\path\Description_Collection.csv foreach ( $promo in $promos ) { foreach ( $description in $descriptions ) { "Promo: " + $promo + " " + "Description: " + $description | Out-File -Width 4096 \\path\Test_$promo.txt -encoding ASCII } }The output I get is the following:Test_promo1.txtPromo: Promo1 Description: Description1Test_promo2.txtPromo: Promo2 Description: Description1and so on... I've tried several ways (breaks, tags) to iterate the 2nd foreach loop, and keep it in the same pipeline as the 1st foreach loop without success. Any assistance is much appreciated.
Nested Foreach loops using 2 lists in same pipeline
You can use the _resetUid method to rename each transformer/estimator:
vecAssembler1 = VectorAssembler(inputCols = ["P1", "P2"], outputCol="features1")
vecAssembler1._resetUid("VectorAssembler for predicting L1")
By default it uses Java's UID random generator.
I'm starting to create more complex ML pipelines and using the same type of pipeline stage multiple times. Is there a way to set the name of stages so that someone else can easily interrogate the pipeline that is saved and find out what is going on? e.g.vecAssembler1 = VectorAssembler(inputCols = ["P1", "P2"], outputCol="features1") vecAssembler2 = VectorAssembler(inputCols = ["P3", "P4"], outputCol="features2") lr_1 = LogisticRegression(labelCol = "L1") lr_2 = LogisticRegression(labelCol = "L2") pipeline = Pipeline(stages=[vecAssembler1, vecAssembler2, lr_1, lr_2]) print pipeline.stagesthis will return something like this:[VectorAssembler_4205a9d090177e9c54ba, VectorAssembler_42b8aa29277b380a8513, LogisticRegression_42d78f81ae072747f88d, LogisticRegression_4d4dae2729edc37dc1f3]but what I would like is to do something like:pipeline = Pipeline(stages=[vecAssembler1, vecAssembler2, lr_1, lr_2], names=["VectorAssembler for predicting L1","VectorAssembler for predicting L1","LogisticRegression for L1","LogisticRegression for L2")so that a saved pipeline model can be loaded by a third party and they will get nice descriptions:print pipeline.stages # [VectorAssembler for predicting L1,VectorAssembler for predicting L2,LogisticRegression for L1,LogisticRegression for L2]
Can I set stage names in spark ML Pipelines?
Put the Atomic Scope (required to execute the PipelineManager) in a Long Running Scope (and Orchestration) with an Exception Handler. You don't need the Compensation Block at all. You should be able to catch the XLANGPipelineManagerException directly, or just Exception.
I'm doing a proof of concept on catching validation error in an orchestration. Eventually, we might want to map them back to a response message.I created a expression shape that calls a Receive Pipeline with Validation (as desribed here:https://learn.microsoft.com/en-us/biztalk/core/how-to-use-expressions-to-execute-pipelines).It's in an atomic scope, which has a Compenation handler, but no Exception handler. The pipeline blew up on validation, and ended the orchestration. How can I capture this and look at the data it generates? Eventually, I will try this component which catches multiple exceptions:rcvPipelineOutputMsgs1 = Microsoft.XLANGs.Pipeline.XLANGPipelineManager.ExecuteReceivePipeline (typeof(Myapp.Pipelines.ValidateAtlastRequestPipeline), msg_In);The error was written to the eventlog. The data is wrong, and I want to get an error, but I want to catch it.Shape name: Call Validation ShapeId: efe2529a-acaa-416b-ad8e-c3faef9624c5 Exception thrown from: segment 2, progress 3 Inner exception: There was a failure executing pipeline "Myapp.Pipelines.ValidateAtlastRequestPipeline". Error details: "The XML Validator failed to validate. Details: The element 'LtlTenderRequest' has invalid child element 'DocumentName'. List of possible elements expected: 'Tenders'.. ". Exception type: XLANGPipelineManagerException
Catching Exception in Pipeline Validation in BizTalk Orch
You don't know how long the time may take between initial GET and subsequent POST; it may never come. I don't think you want to keep the GET open. Instead, make the get, store a token on the server side (HTTPCache or somewhere), and respond to the GET. What you store on the server side must be enough to map the subsequent POST to the GET (a user ID or something).Then, on the client, set up a polling GET on a timer (once a second for 20 seconds, or whatever). Respond with a "processing" message or similar until the POST has been received.Then IoT makes POST request, controller catches it, updates the HTTPCache object to change status from "processing" to "complete" or whatever, and stuffs the desired payload into the cache object.On the next polling GET, the cache object can respond with "complete" and the payload.
I got very unusual (unusual from MVC point of view) requirement. I want to copy the content of one HTTP Post (by content I mean attached file) to another HTTP Get request. This is what should happen:1- First request (GET) comes form user browser at controller, I need to keep this request open and wait for next one.2- Second request (POST) comes after that, from an IoT device including a file.3- I need to echo content of attachment from second request to first request and then close both requests.Please note that I can't make second request from server to IoT device and wait for response, I have to wait for that IoT device to connect sever (you can imagine why!).One way is to make a list of waiter instances (event delegation class) for each GET request at Application level and whenever second request (POST) comes I check the ID (or token) of that request with tokens i already kept in that global list, if any of them matched, I can start read from POST request and raise the event inside that waiter instance to send back bytes to event handler.But as is sounds, it is really messy and somehow against how MVC pipeline designed.can anyone suggest better way?
Echo content of one request to another in ASP.Net MVC 5
While there is an -exec option built into find, it is difficult to use (see Why does find -exec mv {} ./target/ + not work? (on cygwin)). What you are looking for is this pipe command:
find . -maxdepth 1 -type d -regex '\./[^.]*$' | cut -c 3-
Any time the find command outputs something, the cut happens.
For example, I have a command that finds the directory names in the current folder that don't start with a dot: find . -maxdepth 1 -type d -regex '\./[^.]*$' However, it gives me ./Templates ./eclipse-workspace ./Public ./Documents ./VirtualBox VMs ./Videos ./CLionProjects ./jd2 I need to run du -sh for each of these lines sequentially; how can I do that?
How to pipeline in a line by line fashion in ubuntu
DotNetActivity has been replaced by the Custom Activity in V2, as it has the flexibility to run any command and is not limited to DotNet code. If you have a lot of projects which depend on the V1 interface, you might want to implement your own wrapper executable which converts V2 input (JSON files) to the V1 interface. A migration tool is also being considered in the backlog for V2 GA, but there is no commitment yet.
Recently project changed ADF version from v1 to v2. The pipeline, from legacy workpiece, contains dotnetactivity that runs on v1.(w/IDoNetActivity Interface inhered to class)I was searching if dotnetactivity still available on v2. But official adf v2 documents does not seem to have dotnetactivity on activity list instead customactivity with console app.If no, i may need to modify all dotnetactivity and complete corresponding test again...
Is DotNetActivity still available on ADF V2?
The answer to your question is in the comments of the linked answer. Apparently ps -e is sending the header line first, then not sending anything, then buffering the rest of the output. head must think the stream is closed after the first line, so it exits, leaving grep to see the rest. It only works by accident. Is there any way to make it work universally? Everything is possible, but you may need to recode and recompile everything else. So... not possible, sorry.
I was interested in keeping the header of the input in thegrepoutput just like in the questionInclude header in the 'grep' result.Although a working solution (usingsed) is presented right in the question, theanswer with most upvotesattracted my attention. It allows utilizinggrep --color, for example. It suggests piping the input to acommand groupwith the commandsheadandgrep, e.g.:ps -ef | { head -1; grep python; }However, as written in the linked answer and a comment, this does not work with all commands on the left side of the pipe. It works well withpsorlsofbut does not work fordf,lsand others. When using them, only the output of the first command in the group is printed:$ df | { head -1; grep '/'; } Filesystem 1K-blocks Used Available Use% Mounted on $Could anyone explain why piping to a command group only works for some commands? Is there any way to make it work universally?
Why does a pipe to a command group only work for some commands?
The REST Proxy is just a normal Kafka Producer and Kafka Consumer and can be configured with or without retries enabled, just as any other Kafka Producer can.A single producer publishing via a REST Proxy will not achieve the same throughput as a single native Java Producer. However you can scale up many REST proxies and many HTTP producers to get high performance in aggregate. You can also mitigate the performance penalty imposed by HTTP by batching multiple messages together into a consolidated HTTP request to minimize the number of HTTP round trips on the wire.
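As a rough illustration of the batching idea, a Python client could consolidate many records into a single HTTP request to the proxy's v2 produce endpoint (host, port and topic below are placeholders):
import requests

url = "http://rest-proxy.example.com:8082/topics/user-events"
headers = {"Content-Type": "application/vnd.kafka.json.v2+json"}

# one HTTP round trip for 100 records instead of 100 separate requests
payload = {"records": [{"value": {"user": i, "action": "click"}} for i in range(100)]}

response = requests.post(url, json=payload, headers=headers)
response.raise_for_status()
print(response.json())  # per-record partition/offset (or error) details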
We are planning to useConfluent Rest proxyas the platform layer for listening to all user events (& publishing them to Kafka). Working on micro-services model & having varied types of event generators, we want our APIs/event-generators to be decoupled from event listening/handling. Also, at event listening layer, eventresilienceis important for us.From what I understand, if the Rest proxy layer fails to publish to Kafka(once) for any reason, it doesn't retry. This functionality needs to be handled by the caller (the client layer), which needs to make synchronous calls & retry on failure. However, couldn't find any details on this, in the product documentation. Could someone please confirm the same?Confluent Rest Proxy developers claim that with the right Rest Proxy cluster set-up & right request batching by the client, performance as good as native producers' can be achieved. Any exceptions/(positive/negative)thoughts here?Calls to the Rest Proxy Producer API are blocking. If the client doesn't need to know the partition & offset details, can we configure these calls to be non-blocking in anyway, such that once the request is received, resilience is managed by the Rest Proxy layer itself. The client just receives a 200 HTTP status as acknowledgement, whenever a produce msg request is received.
Does Confluent Rest Proxy API (producer) retry event publish on failure
If by “redirect” you mean “transform an HTTP success into a Siesta error,” then yes, this is possible. The pipeline can arbitrarily transform successes into errors and vice versa. Write a ResponseTransformer that unwraps the .success case, checks whether the error flags are set (whatever they are), and if so returns a newly constructed .failure. For example, here is a sketch of a transformer that checks for an X-My-API-Error header on a 200, and if it is present returns an error:
struct APIErrorHandler: ResponseTransformer {
    func process(_ response: Response) -> Response {
        switch response {
        case .success(let entity):
            if let errorMessage = entity.header(forKey: "X-My-API-Error") {
                return logTransformation(
                    .failure(Error(userMessage: errorMessage, cause: MyAPIError(), entity: entity)))
            }
            return response
        case .failure:
            return response  // no change
        }
    }
}
Configure it as follows:
service.configure { $0.pipeline[.cleanup].add(APIErrorHandler()) }
You might also find it helpful to study the transformer from the example project that turns a 404 into a successful response with false for the content, and the one that overrides Siesta's default error message with a server-provided one.
Is possible using Siesta pipeline, receive a success response, parsing it, and depending of the return, redirect it to a failure response?My server response many times return a HTTP 200, but with a error message/flag.
Swift Siesta redirect response to failure
I think you can use TaskFlow; here's a TaskFlow doc. It makes it easy to add stages, and you don't have to worry about passing parameters between them.
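A minimal sketch of what such a linear flow could look like with TaskFlow (treat it as an outline; the task bodies are placeholders):
from taskflow import engines, task
from taskflow.patterns import linear_flow

class A(task.Task):
    default_provides = "a_out"
    def execute(self):
        return "result-of-a"

class B(task.Task):
    default_provides = "b_out"
    def execute(self, a_out):          # receives A's output by name
        return a_out + " -> b"

class C(task.Task):
    def execute(self, b_out):
        print(b_out + " -> c")

flow = linear_flow.Flow("abc-pipeline").add(A(), B(), C())
engines.run(flow)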
I want to create 3 processes (a, b, c) and assemble them in a workflow. Example: a---->b, b---->c, c---->end. The output of a will be sent to the input of b. I tried Ruffus, but it is designed for file transformation. Is there a workflow library in Python?
Is there any workflow tool or library in python?
Keep in mind that each side of a pipe (|) is a separate shell session. In your first attempt you have ... | wc -l | read word | if ..., so while the result of wc -l will be read and stored in the next shell session's variable word, this variable is not passed on to the following shell session. And since the variable word is now undefined, your statement if [ $word -eq 1 ] becomes if [ -eq 1 ] and hence the error. In your second attempt your { read word ; if [ $word -eq 1 ]; then cat << TWO ; fi ;} construct represents a single shell session where the variable word is initialized (with the result of wc -l), and because this is still a single shell the variable can now be used in your if/then block.
I'm trying to learn Here Documents IO-redirections in Korn Shell. I used "read" builtin to test the concepts.I wrote a sample program as follows#!/usr/bin/ksh cat << ONE | wc -l | read word | if [ $word -eq 1 ]; then cat << TWO ; fi | wc -l | read word | if [ $word -eq 2 ]; then cat << THREE ; fi 1 ONE 1 2 TWO 1 2 3 THREEI expect the output to be:1 2 3But the output is[: argument expectedi think the $word variable is empty.Then i rewrote the program as below which works fine:cat << ONE | wc -l | { read word ; if [ $word -eq 1 ]; then cat << TWO ; fi ;} | wc -l | { read word ; if [ $word -eq 2 ]; then cat << THREE ; fi ;} 1 ONE 1 2 TWO 1 2 3 THREEThe output is as expected:1 2 3My question is about the first snippet:why the "read word" command in the pipeline is not updating the word variable in the shell and the subsequent commands are not able to use it.Also 'read word" doesn't write anything to the standard output sousing it in a pipeline is not right, but i wanted to check it out.But then, the command following the "read word" doesn't read anything from standard input so it should be fine.
KSH, read used in a pipeline
You can actually reference environment variables in both shell scripts and Hive scripts. In a shell script, to reference $HOT_VAR: echo $HOT_VAR In a Hive script, to reference $HOT_VAR: select * from foo where day >= '${env:HOT_VAR}' I'm not sure if that is an example of a Hive script; maybe you want to see https://stackoverflow.com/a/12485751/6090676. :) If you are really unable to use environment variables for some reason, you could use command line tools like awk, sed, or perl (why do people always suggest perl instead of ruby?) to search and replace in the files you need to configure (based on environment variables, probably).
I'm working on automating deployment for dev and prod withsqoopjobs that have to be scped onto specific servers for each type. With these jobs, the scripts associated for each sqoop job needs to change based on dev vs prod. Currently, I have a git repo containing a dev and prod folder where approved dev changes are put onto the prod folder but with the variables (references to dev database vs prod database) changed. Then I have two jenkins pipelines that associate with each and have independent triggers. This is incredibly hacky.My current plan is to consolidate into a single folder and replace all the variables with a pseudo variable such as %DBPREFIX% and then having each associated pipeline regex and replace all matches with its associated database prefix on compilation.The files that need to be changed are shell scripts and hive scripts, so I can't just define a environment variable within the Jenkins node shell.Is there a better way to handle this?tl;dr: I need to set variables in different files that can be automatically changed through a jenkins pipeline.
Jenkins pipeline deployment for production and development
Well, apart from the fact that replacing an image with a text file is a poor idea, this is all you need from the command line (there is no requirement for a pipe):
For /F "EOL=H Tokens=2*" %A In ('REG QUERY "HKCU\Software\Microsoft\Internet Explorer\Desktop\General" /v WallpaperSource') Do @Copy /Y "b.txt" "%~B">Nul
I would guess that you'll need to change "b.txt" accordingly.
Edit: Your updated command from a batch file:
@Echo Off
For /F "EOL=H Tokens=2*" %%A In (
    'REG QUERY "HKCU\Software\Microsoft\Internet Explorer\Desktop\General"^
    /V WallpaperSource') Do Copy /Y "%%~B" "backup%%~xB">Nul
I want to pipeline to acopyoperation. For example if we look at the command:copy a.txt b.txtI want to get "a.txt" from the former operation that is (as I have tried and failed):echo a.txt| copy b.txtI know this is not the correct syntax and I am failing (after a long google search) to understand what is the correct syntax to pass the output of the former command as the first argument of the second command.How do I pipe this?
Pipeline in CMD. Passing the output of first command as the first argument of the second command
If you output the result on the standard output (e.g. using Console.WriteLine()), you will be able to use normal Unix pipes to pipe the result to another application, just as you can with any other console application.
I want to use a C# .NET Core console application on Linux (Ubuntu), and I plan to generate HTML text using this application. This text should be passed to a program that already exists on the Linux machine and generates a .pdf file as a result. Unfortunately, I couldn't find pipeline operations in .NET Core. What is the best way to do this?
.net core usage for ubuntu linux pipelining
So the problem was not that when the first element of the pipe ended it terminated the second. The real problem was that the two apps connected by the pipe were launched from a bash script, and when the bash script ended it terminated all of its child processes. I solved it using signal(SIGHUP,SIG_IGN); that way my app executed to the end. Thank you for all the answers; at least I learned a lot about signals and pipes.
The situation is: I have an external application so I don't have the source code and i can't change it. While running, the application writes logs to the stderr. The task is to write a program that check the output of it and separate some part of the output to other file. My solution is to start the app like./externalApp 2>&1 | myAppthe myApp is a c++ app with the following source:using namespace std; int main () { string str; ofstream A; A.open("A.log"); ofstream B; B.open("B.log"); A << "test start" << endl; int i = 0; while (getline(cin,str)) { if(str.find("asdasd") != string::npos) { A << str << endl; } else { B << str << endl; } ++i; } A << "test end: " << i << " lines" << endl; A.close(); B.close(); return 0; }The externalApp can crash or be terminated. A that moment the myApp gets terminated too and it is don't write the last lines and don't close the files. The file can be 60Gb or larger so saving it and processing it after not a variant.Correction: My problem is that when the externalApp crash it terminate myApp. That mean any code after while block will never run. So the question is: Is there a way to run myApp even after the externalApp closed?How can I do this task correctly? I interesed in any other idea to do this task.
ubuntu server pipeline stop process termination when the first exit
In the loop you have <button class="..." id="pid" name="{{.puid}}" value="{{.puid}}">Update</button>, which means that all buttons have an id attribute with the same value pid. This is a bug, as id-s must be unique in the document. And when you call document.getElementById("pid"); the first element matching id="pid" is returned. That explains why "only the ID of the first table row element is getting stored". To create a unique id for each row you could use something like
{{range $index, $value := .}} ...<button class="..." id="pid{{$index}}" name="{{$value.puid}}" value="{{$value.puid}}">Update</button>... {{end}}
but then you have the problem of knowing which form was submitted when your save_data() event fires. To solve this you could send the current form or row id as a parameter, something like
{{range $index, $value := .}} <td><form onsubmit="save_data(this, {{$index}})" action="/" method="get">...</form></td> {{end}}
function save_data(form, rowno) {
    var input = document.getElementById("pid"+rowno);
    localStorage.setItem("id", input.value);
}
I am getting data from the server and iterating over it to create a table and then I am using a form to store an id to local storage using javascript. Here is code snippet<table> <tr><th>Product ID</th></tr> {{range .}} <td ><form onsubmit="save_data()" action="/" method="get"><button class="btn btn-info pid" id="pid" name="{{.puid}}" value="{{.puid}}">Update</button></form></td> {{end}} <table> <script> function save_data() { var input = document.getElementByID("pid"); localStorage.setItem("id", input.value); } </script>However every time, no matter of which table row's "update" button I click, everytime only the ID of the first table row element is getting stored. Is there a way I can generate unique IDs and reference it in Javascript when ranging over the data. Thanks
How to create unique IDs when ranging over data in Golang and use in Javascript
The copy activity doesn't like multiple inputs or outputs. It can only perform a 1 to 1 copy... It won't even change the filename for you in the output dataset, never mind merging files! This is probably intentional so Microsoft can charge you more for additional activities. But let's not digress into that one. I suggest having 1 pipeline copy both files into some sort of Azure storage using separate activities (1 per file). Then have a second downstream pipeline with a custom activity that reads and merges/concatenates the files to produce a single output. Remember that ADF isn't an ETL tool like SSIS. It's just there to invoke other Azure services. Copying is about as complex as it gets.
I have two dataset, one "FileShare" DS1 and another "BlobSource" DS2. I define a pipeline with one copy activity, which needs to copy the files from DS1 to DS3 (BlobSource), with dependency specified as DS2. The activity is specified below:{ "type": "Copy", "typeProperties": { "source": { "type": "FileShare" }, "sink": { "type": "BlobSource" } }, "inputs": [ { "name": "FoodGroupDescriptionsFileSystem" }, { "name": "FoodGroupDescriptionsInputBlob" } ], "outputs": [ { "name": "FoodGroupDescriptionsAzureBlob" } ], "policy": { "timeout": "01:00:00", "concurrency": 1, "executionPriorityOrder": "NewestFirst" }, "scheduler": { "frequency": "Minute", "interval": 15 }, "name": "FoodGroupDescriptions", "description": "#1 Bulk Import FoodGroupDescriptions" }Here, how can i specify multiple source type (both FileShare and BlobSource)? It throws error when i try to pass as list.
Azure data factory specify multiple source type in activity
Currently it's not possible to load/apply a template using the "Add Stage" button in the Delivery Pipeline. Some alternative ways to accomplish it: Have a template/reference stage in your Delivery Pipeline with all the environment properties, and clone new stages from this reference stage. Or export your pipeline (with the reference stage) as yaml and then use this pipeline.yaml for creating new pipelines. More information on exporting the pipeline as yaml and using it in your project can be found here - Delivery Pipeline Info
As a pipeline author I want to enhance the quality and speed up the process of adding more steps to a delivery pipeline by basing my new stage in the pipeline on a (preloaded) template. The template should hold the correct environment variable settings, jobs and input. This should apply when I hit the plus sign of the Add Stage button.
What is the correct way to base a stage on a template when adding a new one to a Delivery Pipeline?
The output of call.py is buffered, so you have to flush() it to send it to main.py.
#!/usr/bin/python2
import sys

getMsg = raw_input()
print getMsg
sys.stdout.flush()

getMsg2 = raw_input()
print getMsg2
sys.stdout.flush()
Note that you need the shebang #!/usr/bin/python2, at least when your OS is Linux (I don't know why the OP's code works without a shebang; maybe some Windows magic?). Also, you can use the -u option so that Python does not buffer its output:
player_pipe = subprocess.Popen(["/usr/bin/python2","-u","./call.py"], stdin=PIPE, stdout=PIPE, stderr=STDOUT, shell=False)
success python pipe stdin, out only one time this sourcemain.pyimport subprocess from subprocess import PIPE, STDOUT player_pipe = subprocess.Popen(["source\call.py", 'arg1'], stdin=PIPE, stdout=PIPE, stderr=STDOUT, shell=True) player_pipe.stdin.write("Send Msg\n") get_stdout = player_pipe.stdout.readline() print("[Get Msg]" + get_stdout) player_pipe.kill() player_pipe.wait()call.pyimport sys getMsg = raw_input() print getMsgbut I want twice or more time stdin, outso update source but it's not workWhat's wrong this sourcemain.py (update-not work)import subprocess from subprocess import PIPE, STDOUT player_pipe = subprocess.Popen(["source\call.py", 'arg1'], stdin=PIPE, stdout=PIPE, stderr=STDOUT, shell=True) player_pipe.stdin.write("Send Msg\n") get_stdout = player_pipe.stdout.readline() print("[Get Msg]" + get_stdout) player_pipe.stdin.write("Send Msg2\n") get_stdout = player_pipe.stdout.readline() print("[Get Msg]" + get_stdout) player_pipe.kill() player_pipe.wait()call.py(update-not work)import sys getMsg = raw_input() print getMsg getMsg2 = raw_input() print getMsg2:D
python pipe only stdin,out once, how to do twice or more time
Use eval:
eval docommand $x $y <(anothercmd arg1 $1 arg3) $z
Example:
$ f='<(ps)'
$ echo $f
<(ps)
$ cat $f
cat: '<(ps)': No such file or directory
$ eval cat $f
  PID TTY          TIME CMD
 4468 pts/8    00:00:00 mksh
 4510 pts/8    00:00:00 bash
 4975 pts/8    00:00:00 bash
 4976 pts/8    00:00:00 cat
 4977 pts/8    00:00:00 ps
$
I have a bash script that normally works like this:[...] file=$1 docommand $x $y $file $zbut I'd like to add an option to the script that would tell it to get the data from a command using ananonymous named pipeinstead of the file.I.e.,I'd like to do something approximatingfile=<(anothercmd arg1 $1 arg3)and have mydocommand $x $y $file $zexpand todocommand $x $y <(anothercmd arg1 $1 arg3) $zIs there a way to get the quoting right to accomplish that?For more concrete context, the script looks at output products from regression tests, normally diffing them from a file with the expected output. I'd like to optionally pass in a revision and diff to the then-expected result, sodiff $from $towould expand todiff <(hg cat -r $fromrev $from) $to.
Substituting a command into the parameter list of another command in bash
Each log format involves its own logic when doing pattern matching, but I would propose something like the following (test.log is something like: date - context - type - msg):
$file = get-content test.log
$pattern = '^(.*) - (.*) - (.*) - (.*)$'
$findings = $file | select-string -pattern $pattern
$lines = foreach($f in $findings) {
    if ($f -match $pattern) {
        $line = @{
            'date' = $matches[1]
            'context' = $matches[2]
            'level' = $matches[3]
            'message' = $matches[4]
        }
        new-object -typename psobject -property $line
    }
}
$lines | export-csv -notypeinformation -delimiter ';' -path 'test.csv'
I would like to use PowerShell to parse a log and output it to a CSV file. What is the basic way to accomplish this?
How to pipe an log file CSV in PowerShell?
I think you have to wrap it in a view.
Remote side:
create or replace view vv as select * from table(dp.testpipe());
Local:
select * from vv@DB_DCPRF_WSTAT;
Passing parameters may get tricky depending on your requirement. https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:6020795600346824518
I try to call a pipelined table function on another server over a dblink, but get the error "ORA-30626: function/procedure parameters of remote object types are not supported". Maybe there is some way I can execute this function? Everything works when it is not remote. My query: select * from table(DP.testpipe@DB_DCPRF_WSTAT)
Oracle Pipelined Table Functions
Google Cloud Dataflow SDK for Python is ready for use, with Beta level of support in Google Cloud Platform at this time. It is based on the Apache Beam codebase. Please follow the Quickstart to get started with this SDK. If you see a specific error, please ask a separate question and quote the specific problem. That said, the SDK for Python doesn't provide an API to access Google Cloud Datastore directly yet. You could write one using the generic Source and Sink APIs. This is not hard, but not trivial either. This is something we are actively working on, and the Python SDK will include this API in the near future. In the meanwhile, I'd suggest perhaps trying the SDK for Java for this task, which includes the DatastoreIO and BigqueryIO APIs.
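For getting started, a minimal Beam-on-Dataflow skeleton in Python might look like the following (project, bucket and table are placeholders, and the in-memory Create stands in for the Datastore source that the Python SDK did not offer yet):
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(runner="DataflowRunner",
                          project="my-project",
                          temp_location="gs://my-bucket/tmp")

with beam.Pipeline(options=options) as p:
    (p
     | "Create" >> beam.Create([{"name": "demo", "value": 1}])
     | "Write" >> beam.io.WriteToBigQuery("my-project:my_dataset.my_table",
                                          schema="name:STRING,value:INTEGER"))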
We have a problem with data transfer from Google Cloud Datastore into BigQuery. We need to create a Dataflow script in Python for this job. The job should transfer data from Datastore to BigQuery using a pipeline in Python, which requires the "Apache Beam" library. But the Apache Beam library is not working. Could anyone help us?
Data Transfer from Google Datastore into Bigquery by using Dataflow Pipeline in Python
The thing is that by default the timestamps are not preserved when working with MPEG-TS. You need to use tsparse and its property set-timestamps=true to add the timestamps; then you will be able to mux it back properly when needed:
gst-launch-1.0 ... ! tsparse set-timestamps=true ! video/mpegts ! tsdemux ! ...
What document did you look into? The second quote about stamp does not make sense to me. You either have some sort of timing information or you are lost; the timestamps must be there in the video/audio stream, no matter whether it comes from a file or RTP, MPEG-TS or whatever. If you still have problems, then update the question with the actual pipeline, because right now I am just guessing what you are actually doing. HTH
I was trying to change the frame size of mpeg2 transport stream video using gstreamer pipeline. The procedure was: Fist, separated the video portion and audio portion using tsdemax, then, went through mpeg2dec, capsfilter (change the frame size), mpeg2enc, and mpegtsmux to combine the audio portion of the stream. The mpegtsmux had no output.I searched, and found a document said:that the nature of mpeg2enc leads to it output not having metadata on timestamps(which might be the cause of the problem), and suggested:then stamp can easily help one out if needed, as in the fragment (mpeg2enc format=3 ! stamp ! avimux)I am using gstreamer 1.0 'C' library, and couldn't find the element "stamp". I appreciate if someone can help me why the video through mpeg2enc can't mux with the audio, and if it is caused by the lack of timestamp, how to add a timestamp on or after the mpeg2enc?
gstreamer mpeg2enc has no timestamp
Break it down and then implement it. v, k and j come in as registers, we assume. You need to build the addresses v+(k<<2) and v+(j<<2). You can use scratch registers; I assume you can trash the incoming k and j registers too, since you won't need them anymore.
k = k << 2;
k = k + v
j = j << 2;
j = j + v
temp0 = load(k)
temp1 = load(j)
store(k) = temp1
store(j) = temp0
and you should be able to convert that to asm; you can re-arrange some of the instructions and have it still work. Edit: I will let you figure it out, but I didn't cheat and compile first. I found that gcc produced the same basic sequence of instructions: two shifts, two adds, two loads, then two stores.
Closed.This question needsdebugging details. It is not currently accepting answers.Edit the question to includedesired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.Closed7 years ago.Improve this questionI was trying to convert C code to MIPS assembly. Here's a snippet. The problem is that i'm not too sure if i am going along the right track. I'm hoping, someone could help.This was the original question:void swap(int v[], int k, int j) { int temp; temp = v[k]; v[k] = v[j]; v[j] = temp; }and this is how far i have got:swap: addi $sp, $sp, -4 sw $s0, 0($sp) add $s0, $zero, $zero L1: add $t1, $s0, $a1 lb $t2, 0($t1) add $t3, $s0, $a0 sb $t2, 0($t3) beq $t2, $zero, L2 addi $s0, $s0, 1 j L1 L2: lw $s0, 0($sp) addi $sp, $sp, 4 jr $raAlright this is as far as i have got. Am i doing this right or am i completely lost it!?
C-code to assembly [closed]
I notice you have skimped on the usually essential checking of return values from open, read and write. If you had checked them, you might have noticed the error in this line:

write(fd, string, sizeof(string));

Because string is a pointer, you are sending 8 bytes (the size of the pointer). You should use strlen(string), or that + 1, depending on whether the terminator needs to be sent:

write(fd, string, strlen(string));

You repeat the mistake in your recent unwise edit:

read(fd, input, sizeof(input));

You would be better off sticking with the original #define and using that for both the buffer allocation and the read request size.
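If you want to convince yourself of the pointer-size explanation, a quick Python check (assuming a 64-bit build, which is what the observed output suggests) shows where the 8 comes from and why exactly "HELLO WO" got through:

import ctypes

msg = b"HELLO WORLD"
print(ctypes.sizeof(ctypes.c_char_p))          # 8 on a 64-bit build: sizeof(string) measures the pointer
print(len(msg))                                # 11: the byte count strlen(string) would report
print(msg[:ctypes.sizeof(ctypes.c_char_p)])    # b'HELLO WO': the truncated message the server printed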
So I'm trying to implement a basic FIFO pipeline in C using mkfifo(). Here are my source files so far:

main.c:

int main(int argc, char *argv[])
{
    char *path = "/tmp/fifo";
    pid_t pid;

    setlinebuf(stdout);
    unlink(path);
    mkfifo(path, 0600);

    pid = fork();
    if (pid == 0) {
        client(path);
    } else {
        server(path);
    }
    return(0);
}

client.c:

void client(char *path)
{
    char *input;
    input = (char *)malloc(200 * sizeof(char));
    read(STDIN_FILENO, input, 200);

    struct Message message;
    message = protocol(input); // protocol simply takes an input string and formats it

    char number = message.server;
    char* string;
    string = message.string;

    int fd;
    fd = open(path, O_WRONLY);
    write(fd, string, sizeof(string));
    printf("Client send: %s\n", string);
    close(fd);
    return;
}

server.c:

void server(char *path)
{
    int fd;
    char *input;
    input = (char *)malloc(200 * sizeof(char));

    fd = open(path, O_RDONLY);
    read(fd, input, sizeof(input));
    printf("Server receive: %s\n", input);
    close(fd);
    return;
}

Now the pipeline is working, but for some reason the server only receives part of the message. For example, if we get the following string from the protocol: "HELLO WORLD", we get the following output:

Server receive: HELLO WO
Client send: HELLO WORLD

The server should receive the whole message, but it doesn't. What am I doing wrong? Thank you for any help!
FIFO pipelining: server only receives part of the message
Another thing to be aware of here is that if your application finds old configuration sets (depending on your uCommerce version), the old configuration could be picked up. uCommerce automatically picks up "components.config" by scanning the website to find it. If, for some reason (usually a VS publish), there's another set of configuration files, those could unintentionally be picked up. This was fixed in version 6.1.1.14217. But as Martin suggests, recycling the app pool is most likely the cause! Hope this helps you. Best regards, Morten
I've added and removed pipeline tasks from Basket.config and Custom.config, but whatever I do, nothing happens. Even if I remove the files, the previously registered custom pipeline tasks are still executed. I don't understand why. What do I have to do to be able to edit these files and make the changes take effect?
ucommerce custom pipeline configs not working
Where to start... Are you sure you're up for this?

What are you trying to do with the lines of the file? You might be better off not iterating as in your example, and just using sed, awk, or grep on it, like this:

sed -e 's/apple/banana/' $TheFile

That will output the contents of $TheFile, replacing all occurrences of "apple" with "banana". That's a trivial example, but you could do much more.

If you really want to loop, then remove the $() from your example. Also, you cannot have a space after the = in your code.
I need to write a script which gets a file from stdin and runs over its lines. My question is: can I do something like this:

TheFile= /dev/stdin
while read line; do
{
....
}
done<"$(TheFile)"

or can I write done<"$1" instead? In that case, the minute I send the function a parameter which is a file, will it be passed to the while loop?
Bash difference between pipeline and parameters
If I understand your question correctly, you want the output of cmd1 to be written to file.out and also used as the input to cmd2. For this case, you could try inserting the tee command (with the -a option to append) into your command pipeline:

cmd1 | tee -a file.out | cmd2 >> file.out

Example

$ printf "one\ntwo\nthree\n" | tee -a file.out | sed 's/.*/\U&/' >> file.out
$ cat file.out
one
two
three
ONE
TWO
THREE

Answer to the edited version of the question

The following construct should do what you want:

{ time cmd1; } 2>> file.out | tee -a file.out | cmd2 >> file.out

Since the time utility provided by Bash operates on the complete pipeline, curly braces are used to group these commands so that they can be considered as a whole. Note: the terminating semicolon (;) is required before the closing brace.

The standard output stream of cmd1 is piped through to the tee command, but since Bash's time utility prints its timing statistics to standard error, file descriptor 2 is redirected so that the timing statistics are appended to file.out.

Modified version of the previous example

{ time printf "one\ntwo\nthree\n"; } 2>> file.out | tee -a file.out | sed 's/.*/\U&/' >> file.out
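If it helps to see what tee is doing conceptually, here is a rough Python sketch of the same idea. It is not the coreutils implementation, and the file name is just a placeholder: each line is appended to the file and passed on downstream unchanged.

import sys

def tee(lines, path):
    # Append every line to `path` (like tee -a) while yielding it to the next stage.
    with open(path, 'a') as f:
        for line in lines:
            f.write(line)
            f.flush()
            yield line

if __name__ == '__main__':
    # Stand-in for "cmd1 | tee -a file.out | cmd2": upper-casing plays the role of cmd2.
    for line in tee(sys.stdin, 'file.out'):
        sys.stdout.write(line.upper())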
I have two commands, say cmd1 and cmd2, and I perform

time cmd1 | cmd2

I want to get something like cmd1 >> file.out and {time cmd1 >> file.out} | cmd2 >> file.out, so can someone suggest how it is actually done?

Edit: as Anthony's answer below suggests, tee works here, but if I write

time cmd1 | tee -a file.out | cmd2 >> file.out

then it only writes the output of cmd1 and of cmd2 to file.out, whereas I also want the output of {time cmd1} in that file. I am using the bash shell on Ubuntu MATE. If the time keyword complicates things, please suggest some other method to time the execution and do the exact same operation.
How do I pass output of (both) piped instructions to file?
I would consider just using a socket, WCF or MSMQ (message queue). I am often faced with problems like this when configuring Windows services: how can the desktop application communicate with the service? I am currently using the message queue for inter-application communication in a number of cases, and I have found it to be relatively simple and effective. It is often overlooked for jobs like this.
I've looked everywhere to find some help with this question, but everything I found is about communicating with or from web clients. I want a main app running on a "server" and client apps running on desktops on a network. I want the main app to be able to call functions in the client apps. These client apps are used to gather info and send it back to the main app. What would be the best, and the most "cost" effective, way to achieve this? I've read a bit about pipelines, but I'm not sure if they can be used here? Thank you.
How to communicate from main app to client app C# [closed]
Please try again. There was an issue with Delivery Pipeline, but everything should be back to normal now.
I made some changes to the local code and tested them with no problems, then committed to the repo via git successfully. The build stays at the building stage, precisely at the cloning step, and doesn't continue. There are no further logs showing anything useful. Please see the attached screenshot. Hopefully someone knows what's happening? Many thanks.
Bluemix build stuck seemingly forever
The man page for ls gives you your answer:

If standard output is a terminal, the output is in columns (sorted vertically) and control characters are output as question marks; otherwise, the output is listed one per line and control characters are output as-is.

When piping to another command, your output is not a terminal (i.e. an interactive login session).
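You can see the same mechanism from your own code. As a small illustration (not ls's actual source), a Python script can ask whether its standard output is a terminal and change its layout accordingly:

import sys

if sys.stdout.isatty():
    # stdout is a terminal: a tool like ls chooses the columnar, human-friendly layout
    print("terminal -> columns")
else:
    # stdout is a pipe or a file: one entry per line is easier for the next command to parse
    print("pipe/file -> one per line")

Run it directly and you get the first branch; pipe it through cat and you get the second, which is exactly the switch ls is making.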
I am confused about a pipeline operation. E.g.:

# ls *.cfg
anaconda-ks.cfg  initial-setup-ks.cfg
# ls *.cfg | cat
anaconda-ks.cfg
initial-setup-ks.cfg

When executing the ls operation only, it displays the items separated by blank space (or tab):

anaconda-ks.cfg  initial-setup-ks.cfg

But through the pipeline, it seems the space is replaced by a newline:

anaconda-ks.cfg
initial-setup-ks.cfg

How should I understand this? Is it the pipeline that modifies the separator?
Does the pipeline modify the separator of words? [duplicate]
OK, now I understand - don't do that! Execute Cygwin commands from Cygwin, not from Windows. To execute a Cygwin command on any file, just give the command the full path to the file (but starting with /cygdrive/); it doesn't have to be under C:/cygwin. For example, from a Cygwin shell window, to see what's in the common Windows folder C:\Documents and Settings:

$ ls -Q '/cygdrive/c/Documents and Settings'
"All Users"  "Default"  "Default User"  "desktop.ini"  "emorton"  etc...
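The mapping from a Windows path to a /cygdrive/ path is mechanical. Here is a tiny illustrative Python helper (not something Cygwin ships; the real cygpath utility already does this conversion for you):

def to_cygdrive(win_path):
    # "C:\Documents and Settings" -> "/cygdrive/c/Documents and Settings"
    drive, rest = win_path[0], win_path[2:]
    return '/cygdrive/' + drive.lower() + rest.replace('\\', '/')

print(to_cygdrive(r'C:\Documents and Settings'))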
I'm trying to run experiments on a text file to get word frequencies. I tried using the following command:

gawk -F"[ ,'\".]" -v RS="" '{for(i=1;i<=NF;i++) words[$i]++;}END{for (i in words) print words[i]" "i}' myfile.txt | uniq -c | sort -nr | head -10

But I get the following error:

gawk: cmd. line:1: fatal: cannot open file '|' for reading (No such file or directory)

I read somewhere that ';' may be used instead of '|' on Windows machines, although this results in a similar error. It seems as though it is reading the first instance of '|' as a file name. Is this the correct way of piping on a Windows machine? Is piping possible on a Windows machine using Cygwin?

EDIT: I added Cygwin to the Windows PATH variable and then used a cmd window. If I wanted to actually use cygwin.exe, does that mean I would have to place any files I wanted to edit within C:/cygwin?
How to correctly pipe commands in Cygwin (Using Windows)?
When experimenting with the pipeline I wasn't able to get it working with Mono either, but if you can get away with just the CoreCLR on Linux then you should be able to. Kestrel, for example, doesn't require Mono anymore.

This was a build script from the beta7 timeframe, but it should be close to what's needed to use RC1 now:

#!/bin/bash
sudo apt-get update
sudo apt-get -y install libunwind8 gettext libssl-dev libcurl3-dev zlib1g
curl -sSL https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.sh | DNX_BRANCH=dev sh && source ~/.dnx/dnvm/dnvm.sh
dnvm install 1.0.0-beta7 -r coreclr -a x64
cd src/dotnetstarter
dnu restore
dnu build
cd ../../test/dotnetstarter.tests
dnu restore
dnu build
dnx test
cd ../../src/dotnetstarter
dnu publish --runtime ~/.dnx/runtimes/dnx-coreclr-linux-x64.1.0.0-beta7

The app was https://github.com/IBM-Bluemix/asp.net5-helloworld and I added the dotnetstarter.tests project, which I was trying to run in the pipeline (the dnx test step). The last publish step isn't required but is included to show it was working.
I am trying to build an ASP.NET 5 application via the Bluemix Pipeline, using a shell script to configure a runtime that supports .NET builds with DNVM. When building the application we need to get dependencies from Mono 4.0 (such as Kestrel), but the latest Mono available via apt-get is 3.2. I tried to resolve this by adding the Mono deb repository to /etc/apt/sources.list so that an apt-get update would fetch the latest Mono package, but due to a permission error we are not allowed to alter sources.list nor add or alter any files in /etc/apt/sources.list.d/*. For example, running:

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
sudo echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo -i tee /etc/apt/sources.list.d/mono-xamarin.list

will result in:

sudo: no tty present and no askpass program specified

Not using sudo gives a permission issue, and I think we have exhausted all possible workarounds such as ssh -t -t and so forth. Does anyone have any suggestions for a workaround, or an alternative method to run a shell script where a .NET build with DNVM and all dependencies would be supported? Using another language or cf push is not an option in this case; we really want to push .NET through the pipeline at any cost.
Pipeline Shell Script Permission Issue on .NET Build Attempt