TFS doesn't have a file system. It has source code stored in a repository (either TFVC or Git). If you want to perform SonarQube analysis, you'll need to get the code out of the repository first. The most common approach is to integrate static analysis into your automated build process. If you're using TFS 2015 or Team Services, there are SonarQube tasks readily available to run SonarQube analysis.
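As a rough illustration of that build-integration approach (a sketch, not from the original answer; the solution name and the project key/name/version are placeholders), a build step on the TFS side would wrap compilation with the SonarQube Scanner for MSBuild:

    MSBuild.SonarQube.Runner.exe begin /k:"my:proj1" /n:"proj1" /v:"1.0"
    MSBuild.exe MySolution.sln /t:Rebuild
    MSBuild.SonarQube.Runner.exe end

Run this once per project (proj1, proj2, proj3), each with its own key and name; the analysis then reads the sources the build agent has already checked out, so no sonar.sources URL is needed.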
I want to use sonarqube to analyze code that is stored in the TFS file system, which is on a remote machine. So I have:

    machine A -> sonarqube
    machine B -> TFS

This machine B contains three projects: proj1, proj2, and proj3. I need to configure sonarqube to "look" at machine B at those 3 projects and analyze them. In my sonar-project.properties file I have the following:

    # must be unique in a given SonarQube
    sonar.projectKey=my:proj1
    # this is the name displayed in the SonarQube UI
    sonar.projectName=proj1
    sonar.projectVersion=1.0
    # sonar sources
    sonar.sources=.    <---- what do I need to put here?
    # Language
    sonar.language=cs
    # Encoding of the source files
    sonar.sourceEncoding=UTF-8

What do I need to put in the field sonar.sources? I thought I just needed the IP/port location of the TFS root of the three projects ... Something like http://192.168.1.102:3000?

    sonar.sources=http://192.168.1.102:3000 # doesn't work :(

But it doesn't work either... Also, how can I set sonar.projectKey and sonar.projectName to analyze all three projects? In my above example it is just looking at proj1.
How to set sonarqube to analyze remote code from TFS filesystem?
Since SonarQube 5.4, an API exists to authenticate a user using an external provider: if the provider is compatible with OAuth2, use the OAuth2IdentityProvider; otherwise use the BaseIdentityProvider. Since 5.5, you can also associate groups from the provider with the user. Please have a look at the GitHub Authentication plugin, which allows users to authenticate from GitHub and to associate GitHub groups with SonarQube groups.
I have a custom feature to implement in SonarQube 5.5. I have a plugin which is basically a fork of the Sonar OpenID Plugin to reflect our company's SSO authentication. It is working fine, and it uses cookie-based authentication to authenticate users and bring them back into Sonar once validated successfully. There is one more change that I have been told to make. When the user gets authenticated successfully by the external authentication provider (the company's SSO service) and the user is created (if not present already), then I need to validate the user. On the basis of the user's validation, I need to add the user to a specific group. If the validation passes, then add this user to group A (a Sonar group). If not, then add him to group B (a Sonar group). I cannot use the default group behaviour, as in both cases I need them to be added to some Sonar group. This is to be done in a Sonar plugin for SonarQube 5.5. Can someone tell me how to do that using the extension points, using the OpenID plugin as a starting point? Do note that all the existing features of the OpenID plugin (auto creation of users if not existing) need to be retained. Some sample code on how to do this would be really helpful. Thanks in advance!
How do I add a user to a specific group based on a condition after they are authenticated in Sonar?
There is unfortunately no way to import the dotCover XML format. Your best bet is probably, as you mentioned, to generate the dotCover HTML format from the start. There is one alternative, though I wonder if it is not more convoluted than generating the dotCover HTML format in the first place: convert the dotCover XML files into the SonarQube Generic Test Coverage XML format and import them with the Generic Test Report plugin. This is usually quite easy to do with an XSLT transformation.
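For reference, the target of such an XSLT transformation is the generic coverage format, which looks roughly like this (file path and line numbers are placeholders; the exact attribute set may vary by plugin version):

    <coverage version="1">
      <file path="src/Project/SomeFile.cs">
        <lineToCover lineNumber="6" covered="true"/>
        <lineToCover lineNumber="7" covered="false"/>
      </file>
    </coverage>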
Is there any way to use the .xml output from a dotCover analysis when importing into SonarQube using the Scanner for MSBuild? Currently running: SQ 5.4, C# Plugin 5.0. This is the specific SonarQube property key in question: sonar.cs.dotcover.reportsPaths. The example shown in this guide tells me that only the .html extension is accepted. Why is that? I am coming from a world where we save the report outputs in .xml and later transform them into an .html report. Is there a way to use the .xml output?
SonarQube C# DotCover report in xml
I take Simon's answer in the comment as definitive: "I confirm that it's not possible to log username in access.log. It will be fixed in versions 6.x : jira.sonarsource.com/browse/SONAR-7581" -- Simon Brandhof, SonarSource
The current access logs in the logs directory do not contain the username. The sonar.properties configuration file does not seem to provide a way to customize the log format so that it includes the username. Is there any way to record the logged-in username in each access.log entry?
Is there a way to log username and client IP address in SonarQube 5.x access.log?
You cannot mix various differential periods in a Quality Gate since SonarQube 5.4. As per the upgrade notes: "In SonarQube 5.4, quality gate conditions can now only check absolute values or differential values for the Leak period." So while you may still customise the Leak Period at a project level (see this question), all Quality Gate conditions will either use that same period or the absolute value.
I've got an issue with my Quality Gate definitions. For a single gate I have multiple rules for checking against other results. There are, for example, some rules comparing new issues of several certainties to the last analysis, and some rules against the last project version. In the Differential View setup I can define multiple periods, e.g. last analysis or last version. When defining the rules I can only select Leak as the type. How can I define which of the rules are compared to the last analysis (Leak Period 1) and which against the last project version (Leak Period 2)? Thanks for your help. Christian
Sonarqube 5.4 Leak Configuration issue
Now I got it to work. It was a silly mistake: my build service controller was running under a service account, and I needed to run NDepend first under that account to activate it. Once that was done, everything started working as expected.
I have a build in TFS 2013 running on a controller with NDepend installed. Our SonarQube instance has the NDepend plugin installed. From the build I set it to run the SonarQube runner MSBuild.SonarQube.Runner.exe with these parameters:

    begin /k:Test /n:"Test" /v:1.0 /d:sonar.cs.ndepend.projectPath="C:\TMP\TEST.ndproj" /d:sonar.cs.ndepend.reportPath="C:\TMP\ndepend-report.xml"

The build fails with:

    Caused by: org.sonar.api.utils.command.CommandException: NDepend execution failed with exit code: -532462766 [command: C:\tmp\NDepend_6.2.1.8630\Integration\SonarQube\NDepend.SonarQube.RuleRunner.exe C:\TMP\TEST.ndproj C:\TMP\ndepend-report.xml

So at the end of the build I can see this message and nothing else:

    ERROR: ERROR: Re-run SonarQube Runner using the -X switch to enable full debug logging.
    The SonarQube Scanner did not complete successfully
    Post-processing failed. Exit code: 1

How can I figure out what is wrong with it? If I run the command on my build server in a CMD window it works, but running from the SonarQube runner it fails.
Sonarqube and NDepend? How to get data from NDepend during a build in TFS?
First, try using version 4.5.7 of SonarQube, as it is the current Long Term Support (LTS) version. Then, have a look at the SonarQube documentation on writing custom rules in Java. The java-custom-rules Maven project will then be a good starting point to write your own rules. It's an example project with some sample rules.
I'm just a beginner at static code analysis and I have adopted SonarQube for the purpose. Right now I'm stuck at the point of beginning customization. I need to know the process or a step-by-step guide to customize SonarQube, i.e. integrating my own rules for the language. I'm currently running:

    SonarQube 4.2
    Sonar-Scanner 2.5.1
    Eclipse Mars M1 (Eclipse 4.5.0)
    PostgreSQL 9.4
Beginner's guide to writing your own custom rules for Java in SonarQube
This is what the platform exclusions mechanism is for. If I were you, I wouldn't try to put such exclusions into a rule implementation. Instead, just write a blanket/generic rule implementation and set Issue exclusions at the global level to ignore the directories you want to leave out.
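As a sketch, the equivalent analysis properties look something like this (the rule key and path pattern are placeholders; the same values can be set in the server UI under the exclusions settings):

    # ignore one rule in the bin directories
    sonar.issue.ignore.multicriteria=e1
    sonar.issue.ignore.multicriteria.e1.ruleKey=javascript:MyCustomRuleKey
    sonar.issue.ignore.multicriteria.e1.resourceKey=**/bin/**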
I am writing a custom rules plugin for the SonarQube JavaScript plugin. Now I want to disable my check in specific directories from within the rule. (It should not be run in the bin path, for example.) Now my question: how can I get the path of the file that is checked, relative to the project path? Another solution would be: how do I get the project path? My code is:

    @Override
    public void visitNode(Tree tree) {
        CallExpressionTree callExpression = (CallExpressionTree) tree;
        if (callExpression.callee().is(Kind.DOT_MEMBER_EXPRESSION)) {
            DotMemberExpressionTree callee = (DotMemberExpressionTree) callExpression.callee();
            // get the file that is checked
            File toTestedFile = getContext().getFile();
            // how to get the project path here to get the relative path???
            if (isCalleeConsoleLogging(callee)) {
                addLineIssue(tree, MESSAGE);
            }
        }
    }
How to get project path in Sonarqube Javascript Extension
You can simply bind to a variable to capture your first result and then use it in the second List.update_at/3 call:

    def update_2d(array, inds = [first_coord, second_coord], val) do
      updated =
        array
        |> Enum.at(second_coord)
        |> List.update_at(first_coord, fn(x) -> val end)

      List.update_at(array, second_coord, fn(x) -> updated end)
    end

You can also use the capture operator to do this:

    def update_2d(array, inds = [first_coord, second_coord], val), do:
      array
      |> Enum.at(second_coord)
      |> List.update_at(first_coord, fn(x) -> val end)
      |> (&(List.update_at(array, second_coord, fn(y) -> &1 end))).()

I find using a variable much more readable, but the option is there.
Consider a (smelly, non-idiomatic) function like the below:

    def update_2d(array, inds, val) do
      [first_coord, second_coord] = inds
      new_arr = List.update_at(array, second_coord, fn(y) ->
        List.update_at(Enum.at(array, second_coord), first_coord, fn(x) -> val end)
      end)
    end

This function will take in a list of lists, a list of two indices, and a value to insert within the list of lists at the location specified by the indices. As a first step to making this more Elixir-ey, I start laying pipe:

    array
    |> Enum.at(second_coord)
    |> List.update_at(first_coord, fn(x) -> val end)

That gets me most of the way there, but how do I pipe that output into the anonymous function of the last List.update_at call? I can nest it within the original call, but that seems like giving up:

    List.update_at(array, second_coord, fn(y) ->
      array
      |> Enum.at(second_coord)
      |> List.update_at(first_coord, fn(x) -> val end)
    end)
Is it possible to choose where the pipe output is inserted into Elixir function args?
HttpContext.Current.ApplicationInstance.CompleteRequest(); See the documentation for details.
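A minimal sketch of using this from an HttpModule to short-circuit the pipeline (the module name and response text are hypothetical):

    using System.Web;

    public class ShortCircuitModule : IHttpModule
    {
        public void Init(HttpApplication app)
        {
            app.BeginRequest += (sender, e) =>
            {
                var context = ((HttpApplication)sender).Context;
                context.Response.StatusCode = 200;
                context.Response.Write("Handled early.");
                // Skips the remaining pipeline events and jumps straight to EndRequest.
                context.ApplicationInstance.CompleteRequest();
            };
        }

        public void Dispose() { }
    }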
I have an HttpModule that has bound an event handler to EndRequest. Is there any way to handle the request inside the event handler? Meaning, I don't just want to run code and keep the request moving -- I want to stop it dead in its tracks, return a 200 status code, and call it a day, without the request continuing to the next step in the pipeline.
Can a request be handled and ended prematurely, early in the pipeline?
We have released Petastorm, an open source library that allows you to use Apache Parquet files directly via the TensorFlow Dataset API. Here is a small example:

    with Reader('hdfs://.../some/hdfs/path') as reader:
        dataset = make_petastorm_dataset(reader)
        iterator = dataset.make_one_shot_iterator()
        tensor = iterator.get_next()
        with tf.Session() as sess:
            sample = sess.run(tensor)
            print(sample.id)
I am trying to design an input pipeline with Dataset API. I am working with parquet files. What is a good way to add them to my pipeline?
Tensorflow Dataset API: input pipeline with parquet files
Comment: So I can't combine the sql-files and the output from my Python script together and pipe them to psql?

Another approach: create your own cat with Python, or add the first three lines of code to import_gtfs_to_sql.py, for instance:

    # Usage
    python import_gtfs_to_sql.py... | python myCat.py gtfs_tables.sql | psql mydbname

    # myCat.py
    import sys
    with open(sys.argv[1]) as fh:
        sys.stdout.write(fh.read())
    sys.stdout.write(sys.stdin.read())

Comment: I already know the error comes from type <(python...)

The TYPE command does not accept stdin. Therefore your only solution is Option 2. Another approach is to use your Python script to print gtfs_tables.sql.

Question: the syntax for the filename, directory or filesystem is wrong.

Find out which part the above error comes from:

    a) type gtfs_tables.sql
    b) type <(python ...
    c) type gtfs_tables.sql <(python ...
    d) type gtfs_tables.sql | psql mydbname
    e) type <(python ... | psql mydbname

Save the output of <(python ... to a file and try:

    python ... > tmp_python_output
    type gtfs_tables.sql tmp_python_output | psql mydbname
I have the following code copied from the github gtfs_SQL_importer:

    cat gtfs_tables.sql \
    <(python import_gtfs_to_sql.py path/to/gtfs/data/directory) \
    gtfs_tables_makeindexes.sql \
    vacuumer.sql \
    | psql mydbname

I tried to run this on Windows and replaced the call to the UNIX command cat with the Windows equivalent type, which should work similarly, as per is-there-replacement-for-cat-on-windows. However when I execute that code I get this error:

    The syntax for the filename, directory or filesystem is wrong.

So I tried to limit the number of piped files to only combine the call to python and the call to psql:

    type <(C:/python27/python path/to/py-script.py path/to/file-argument) | psql -U myUser -d myDatabase

which results in the same error. However when I execute the python script alone it works as expected:

    C:/python27/python path/to/py-script.py path/to/file-argument

So I assume the error results from using type in order to pipe the result of the script directly to psql. Does anyone know the correct syntax?

EDIT: To ensure the problem is not related to a file not being found, I used absolute paths for all arguments within my command except the type and psql commands (which are both handled via the %PATH% variable).
How to pipe multiple sql- and py-scripts
Edit (03/2018): Since writing this question in 2012 and answering it in 2014, numerous tools have come online to support what I originally wanted. Jenkins now supports scripted pipelines natively and has an excellent UI (Blue Ocean) for rendering them. Those stumbling on this question should consider using these for their pipeline needs.

https://jenkins.io/doc/book/pipeline/
https://jenkins.io/projects/blueocean/

End edit.

(Old answer) It didn't exist when I asked the question, but Jenkins' Build Flow Plugin does exactly what I needed, and creates pipeline views very well.

https://wiki.jenkins-ci.org/display/JENKINS/Build+Flow+Plugin
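For a flavour of what a native Jenkins pipeline looks like today, here is a minimal, hypothetical Jenkinsfile (stage names and shell commands are placeholders); the parallel block covers the fan-out the question asks about:

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps { sh 'make build' }
            }
            stage('Test') {
                parallel {
                    stage('Unit')        { steps { sh 'make test-unit' } }
                    stage('Integration') { steps { sh 'make test-integration' } }
                }
            }
            stage('Deploy') {
                steps { sh './deploy.sh staging' }
            }
        }
    }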
I need a tool that will graphically represent our build pipeline. The below screenshots of ThoughtWorks Go and the Jenkins Pipeline plugin illustrate almost exactly what I want it to look like.The problem is that we already use Jenkins for our builds and deployments, along with a few other custom tools for orchestration type duties. We don't want a pipeline tool to do the builds or deployments itself, it just needs to invoke Jenkins! I tried out Go, and the first thing it asked for is where my source code is and how to build it. I couldn't get Go to work in a way where Jenkins does the builds but Go creates the pipeline.I've also experimented with the Jenkins Pipeline plugin, but it's very limiting. For one, it doesn't work with the Join plugin (so we can't have jobs run in parallel, which is a requirement). It also assumes that all of our tasks happen in Jenkins (Jenkins can't see outside of our test lab and into our production environment). I don't know if this is a viable option either.So, does anyone have any recommendation for some pipeline tools that will do what I'm looking for?
What is a good tool for Build Pipelines?
Glad to see I'm not the only one who has this issue. I've just been opening MGCB from the taskbar. Not exactly a solution, but it avoids the error message.
When I try to open the MonoGame pipeline (MGCB Editor), I get an error that says either "Catastrophic failure (Exception from HRESULT: 0x8000FFFF (E_UNEXPECTED))" or "The Extender Provider failed to return an Extender for this object". The editor still opens, and I can add and build items there, but the error bothers me. I tried to reinstall MonoGame, but it didn't help. It appeared out of nowhere and I have no clue why or how to fix it.
Why do I get "Catastrophic failure (Exception from HRESULT: 0x8000FFFF (E_UNEXPECTED))" error when opening Monogame pipeline?
In my project I had the same problem (I think). Error after build: COM library not registered. But I had registered my COM library with C:\windows\system32\regsvr32.exe and registered the x64 dll. My solution: I changed BuildPlatform from "Any CPU" to "x64".
Background: A few projects in my solution are dependent on COM libraries, so these COM DLLs have to be registered before building the actual solution. In Azure DevOps - Pipeline - Build - Task, I added a "Command Line" agent job with the following command.

Scenario 1:

    C:\windows\system32\regsvr32.exe /s [DLLFilePath]\[DLLName].dll

Scenario 2:

    CD [DLLFilePath]
    C:\windows\system32\regsvr32.exe /s [DLLName].dll

But both scenarios return the same error during build time:

    [error]Cmd.exe exited with code '3'.

Note: The DLL is copied to the above-mentioned location using a separate agent job before invoking regsvr32. [DLLFilePath]\[DLLName].dll is a local path on the build agent, say c:\..\someLibrary.dll
Azure DevOps Pipeline Build - register COM DLL
That greedy search using an LLM might work if you do a beam search, but at each leaf node you then generate an embedding for what you have generated so far down each branch and compare the distance of that branch with the embedding you're searching for... now that I write it out, it strikes me as a bit like the A* algorithm! You would stop when you're within some distance threshold or you start to get further from the target? I assume if it is guided by the LLM it would be grammatical? Might give it a shot when I have a moment...
Is there a method for converting Hugging Face Transformer embeddings back to text? Suppose that I have text embeddings created using Hugging Face's ClipTextModel using the following method:

    import torch
    from transformers import CLIPTokenizer, CLIPTextModel

    class_list = [
        "i love going home and playing with my wife and kids",
        "i love going home",
        "playing with my wife and kids",
        "family",
        "war",
        "writing",
    ]

    model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
    inputs = tokenizer(class_list, padding=True, return_tensors="pt")
    outputs = model(**inputs)
    hidden_state = outputs.last_hidden_state
    embeddings = outputs.pooler_output

My embeddings are in the variable "embeddings". Questions: Is it possible for me to convert my embeddings back to the input strings in "class_list"? To be precise: if I sent the embeddings to a person who had no foreknowledge of the list of original strings, what steps would they need to take to extract the list of the original strings? If so, how can I do this?
Converting Hugging Face Transformer Text Embeddings Back to Text
You can encapsulate your Pipeline into a ColumnTransformer, which allows you to select the data that is processed through the pipeline, as follows:

    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.impute import SimpleImputer
    from sklearn.pipeline import make_pipeline
    from sklearn.compose import make_column_selector, make_column_transformer

    col_to_exclude = 'A'

    df = pd.DataFrame({'A': [0]*10, 'B': [1]*10, 'C': [2]*10})

    numerical_transformer = make_pipeline(
        SimpleImputer(strategy='mean'),
        StandardScaler(with_mean=False)
    )

    transform = make_column_transformer(
        (numerical_transformer, make_column_selector(pattern=f'^(?!{col_to_exclude})'))
    )

    transform.fit_transform(df)

NOTE: I am using a regex pattern here to exclude the column A.
In general, we use df.drop('column_name', axis=1) to remove a column from a DataFrame. I want to add this transformer into a Pipeline. Example:

    numerical_transformer = Pipeline(steps=[('imputer', SimpleImputer(strategy='mean')),
                                            ('scaler', StandardScaler(with_mean=False))])

How can I do it?
Adding Dropping Column instance into a Pipeline
I think using Option.map would be more idiomatic:

    let g x = x |> intOption |> Option.map foo |> Option.map bar
I like to use the pipe operator '|>' a lot. However, when mixing functions that return 'simple' values with functions that return 'Option-typed' values, things become a bit messy, e.g.:

    // foo: int -> int*int
    // bar: int*int -> bool
    let f (x: string) = x |> int |> foo |> bar

works, but it might throw a 'System.FormatException:...'. Now assume I want to fix that by making the function 'int' give an optional result:

    let intOption x =
        match System.Int32.TryParse x with
        | (true, x) -> Some x
        | (false,_) -> None

The only problem now is that of course the function

    let g x = x |> intOption |> foo |> bar

won't compile due to typing errors. OK, simply define an 'optionalized' pipe:

    let ( |= ) x f =
        match x with
        | Some y -> Some (f y)
        | None -> None

Now I can simply define:

    let f x = x |> intOption |= foo |= bar

and everything works like a charm.

OK, question: Is that idiomatic F#? Acceptable? Bad style?

Remark: Of course, given the right types, the '|=' operator allows one to split and merge 'pipelines' with options at will while only caring about options where they matter:

    x |> ...|> divisionOption |= (fun y -> y*y) |=...|>...
is an "optionalized" pipe operator idiomatic F#
Something like this?

    get-childitem -recurse -filter *.exe | %{Write-Host Examining file: $_.fullname; $_} | measure-object -Property length -Average
So, I've got a set of directories 00-99 in a folder. Each of those directories has 100 subdirectories, 00-99. Each of those subdirectories has thousands of images. What I'm attempting to do is basically get a progress report while it's computing the average file size, but I can't get that to work. Here's my current query:

    get-childitem <MyPath> -recurse -filter *.jpeg | Where-Object { Write-Progress "Examining File $($_.Fullname)" true } | measure-object -Property length -Average

This shows me a bar that updates as each of the files gets processed, but at the end I get back no average file size data. Clearly, I'm doing something wrong, because I figure trying to hack Where-Object to print a progress statement is probably a bad idea(tm). Since there are millions and millions of images, this query obviously takes a VERY LONG time to work. get-childitem is pretty much going to be the bulk of the query time, if I understand things correctly. Any pointers to get what I want? AKA, my result would ideally be:

    Starting...
    Examining File: \00\00\Sample.jpeg
    Examining File: \00\00\Sample2.jpeg
    Examining File: \00\00\Sample3.jpeg
    Examining File: \00\00\Sample4.jpeg
    ...
    Examining File: \99\99\Sample9999.jpg
    Average File Size: 12345678.244567

Edit: I can do the simple option of:

    get-childitem <MyPath> -recurse -filter *.jpeg | measure-object -Property length -Average

And then just walk away from my workstation for a day and a half or something, but that seems a bit inefficient =/
Powershell Get-ChildItem progress question
You can override the complete function.

    class LuigiTaskB(luigi.Task):
        def run(self):
            print "running task b"
            with self.output().open('w') as out_file:
                print >> out_file, "some text"

        def output(self):
            return luigi.LocalTarget("somefile")

    class LuigiTaskA(luigi.Task):
        task_complete = False

        def requires(self):
            return LuigiTaskB()

        def run(self):
            print "running task a"
            self.task_complete = True

        def complete(self):
            # Make sure you return False when you want the task to run,
            # and True when it is complete.
            return self.task_complete

    # This will output:
    # running task b
    # running task a
    # And this on the second time you'll run:
    # running task a

The complete() function normally looks at the output() function; by overriding complete() you can do without any output and write your own completion condition. Notice that if your complete function depends on the run function, the task may not be skipped.
When looping over files with Luigi I do not want to be forced to save empty files just to show that a task was complete, and then have the next task check if there are any rows in the txt, etc. How can I have a task show that it succeeded (i.e. the run method worked as expected) without outputting a file? Am I missing something here?
What to do when I don't want Luigi to output a file but show the task as complete?
One approach or workaround that you can use is templates. I'm sure the best option would be the ability to use a variable group dynamically; while that isn't possible, I believe this is one option.

    trigger:
      tags:
        include:
          - develop
          - qa
          - production

    variables:
      - name: defaultVar
        value: 'value define yml on project repo'
      - group: variables-example-outside

    resources:
      repositories:
        - repository: yml_reference
          type: github
          ref: refs/heads/yml_reference
          name: enamba/azure_devops
          endpoint: enamba

    jobs:
      - ${{ if eq(variables['Build.SourceBranch'], 'refs/tags/develop') }}:
        - template: deploy.example.yml@yml_reference
          parameters:
            GroupVariablesName: variables-example-developer
      - ${{ if eq(variables['Build.SourceBranch'], 'refs/tags/qa') }}:
        - template: deploy.example.yml@yml_reference
          parameters:
            GroupVariablesName: variables-example-qa
      - ${{ if eq(variables['Build.SourceBranch'], 'refs/tags/production') }}:
        - template: deploy.example.yml@yml_reference
          parameters:
            GroupVariablesName: variables-example-production

Inside the template:

    parameters:
      GroupVariablesName: ''

    jobs:
      - job: deploy_app
        displayName: 'Deploy application'
        variables:
          - group: ${{ parameters.GroupVariablesName }}
          - group: variables-example-inside
        steps:
          - script: |
              echo outsidevar '$(outsidevar)'
              echo defaultVar '$(defaultVar)'
              echo var by tag '$(variable)'
I have two environments on Azure. The only difference between them is the environment variables that come from variable groups. Is it possible to set up the group name dynamically for one pipeline, instead of setting up two pipelines that each map their own variable group? Here is an example of my build pipeline:

    trigger:
      - master
      - develop

    jobs:
      - job: DefineVariableGroups
        steps:
          - script: |
              if [ $(Build.SourceBranch) = 'refs/heads/master' ]; then
                echo "##vso[task.setvariable variable=group_name_variable;isOutput=true]beta_group"
              elif [ $(Build.SourceBranch) = 'refs/heads/develop' ]; then
                echo "##vso[task.setvariable variable=group_name_variable;isOutput=true]alpha_group"
              fi
            name: 'DefineVariableGroupsTask'
          - script: echo $(DefineVariableGroupsTask.group_name_variable)
            name: echovar # that works.

      - job: Test
        dependsOn: DefineVariableGroups
        pool:
          vmImage: 'Ubuntu-16.04'
        variables:
          - group: $[ dependencies.DefineVariableGroups.outputs['DefineVariableGroupsTask.group_name_variable'] ] # that doesn't work. Error here
        steps:
          - script: echo $(mode)
            displayName: 'test'
Can group name variable be dynamic in azure pipelines?
You also have the option of using advanced functions, instead of the basic approach above:

    function set-something {
        param(
            [Parameter(ValueFromPipeline=$true)]
            $piped
        )

        # do something with $piped
    }

It should be obvious that only one parameter can be bound directly to pipeline input. However, you can have multiple parameters bind to different properties of the pipeline input:

    function set-something {
        param(
            [Parameter(ValueFromPipelineByPropertyName=$true)]
            $Prop1,

            [Parameter(ValueFromPipelineByPropertyName=$true)]
            $Prop2
        )

        # do something with $prop1 and $prop2
    }

Hope this helps you on your journey to learn another shell.
SOLVED: The following are the simplest possible examples of functions/scripts that use piped input. Each behaves the same as piping to the "echo" cmdlet.

As functions:

    Function Echo-Pipe {
        Begin {
            # Executes once before first item in pipeline is processed
        }
        Process {
            # Executes once for each pipeline object
            echo $_
        }
        End {
            # Executes once after last pipeline object is processed
        }
    }

    Function Echo-Pipe2 {
        foreach ($i in $input) {
            $i
        }
    }

As scripts:

    # Echo-Pipe.ps1
    Begin {
        # Executes once before first item in pipeline is processed
    }
    Process {
        # Executes once for each pipeline object
        echo $_
    }
    End {
        # Executes once after last pipeline object is processed
    }

    # Echo-Pipe2.ps1
    foreach ($i in $input) {
        $i
    }

E.g.

    PS > . theFileThatContainsTheFunctions.ps1 # This includes the functions into your session
    PS > echo "hello world" | Echo-Pipe
    hello world
    PS > cat aFileWithThreeTestLines.txt | Echo-Pipe2
    The first test line
    The second test line
    The third test line
How do you write a powershell function that reads from piped input?
The Pipeline constructor expects an argument steps, which is a list of tuples. Corrected version:

    pipeline = Pipeline([('text_length', AverageWordLengthExtractor()), ('scale', StandardScaler())])

More info in the official docs.
I am trying to create a custom transformer for a sklearn pipeline which will extract the average word length of a particular text and then apply a standard scaler to standardize the dataset. I am passing a Series of texts to the pipeline.

    class AverageWordLengthExtractor(BaseEstimator, TransformerMixin):

        def __init__(self):
            pass

        def average_word_length(self, text):
            return np.mean([len(word) for word in text.split( )])

        def fit(self, x, y=None):
            return self

        def transform(self, x, y=None):
            return pd.DataFrame(pd.Series(x).apply(self.average_word_length))

Then I created a pipeline like this:

    pipeline = Pipeline(['text_length', AverageWordLengthExtractor(), 'scale', StandardScaler()])

When I execute fit_transform on this pipeline I get the error:

    File "custom_transformer.py", line 48, in <module>
        main()
    File "custom_transformer.py", line 43, in main
        'scale', StandardScaler()])
    File "/opt/conda/lib/python3.6/site-packages/sklearn/pipeline.py", line 114, in __init__
        self._validate_steps()
    File "/opt/conda/lib/python3.6/site-packages/sklearn/pipeline.py", line 146, in _validate_steps
        names, estimators = zip(*self.steps)
    TypeError: zip argument #2 must support iteration
Scikit-learn pipeline TypeError: zip argument #2 must support iteration
To set names of an object in a pipeline, use setNames or purrr::set_names:

    1:3 %>%
      purrr::map(~ rnorm(5, .x)) %>%
      purrr::set_names(paste0('V', 1:3))
    #> $V1
    #> [1] 1.4101568 2.0189473 1.0042691 1.4561920 0.8683156
    #>
    #> $V2
    #> [1] 2.0577889 2.4805984 1.4519552 0.9438844 0.4097615
    #>
    #> $V3
    #> [1] 0.4065113 4.0044538 2.8644864 2.4632038 4.0581380

    1:3 %>%
      purrr::map(~ rnorm(5, .x)) %>%
      setNames(paste0('V', 1:3))
    #> $V1
    #> [1] 0.52503624 0.69096126 0.08765667 0.97904520 0.29317579
    #>
    #> $V2
    #> [1] 2.561081 1.535689 1.681768 2.739482 1.842833
    #>
    #> $V3
    #> [1] 2.619798 1.341227 2.897310 2.860252 1.664778

With purrr::map you could also name your input_vars, as map keeps names:

    c(V1 = 1, V2 = 2, V3 = 3) %>%
      purrr::map(~ rnorm(5, .x))
    #> $V1
    #> [1] 1.74474389 1.69347668 -1.03898393 0.09779935 0.95341349
    #>
    #> $V2
    #> [1] 1.5993430 0.8684279 1.6690726 2.9890697 3.8602331
    #>
    #> $V3
    #> [1] 3.453653 3.392207 2.734143 1.256568 3.692433
I was wondering if it is possible to set the names of the elements of a list at the end of pipeline code.

    data <- input_vars %>%
      purrr::map(get_data)
    names(data) <- input_vars

Currently I pass a string of variables into a function which retrieves a list of dataframes. Unfortunately this list does not automatically have named elements, so I add them "manually" afterwards. In order to improve readability I would like to have something as follows:

    data <- input_vars %>%
      purrr::map(get_comm_data) %>%
      setnames(input_vars)

However, this gives me the following error:

    Error in setnames(., input_vars) : x is not a data.table or data.frame

Does someone have an elegant solution to this? Any help is appreciated!
setnames in pipeline R code
I still think it is the chain of responsibility, even with the specific behaviour of not stopping the chain. Several patterns are very similar and they do have variations. I think the best way to see if a pattern fits a case is to look at its intent. From the GoF book, on Chain of Responsibility: "Avoid coupling the sender of a request to its receiver by giving more than one object a chance to handle the request. Chain the receiving objects and pass the request along the chain until an object handles it." (pg. 223) So if there's no coupling between your handlers and the objects going through the chain, I don't think it matters that the object will always fall to the end of the chain, even if handled.
I'm building a multiprocess architecture that seems to be a strange meld of a pipeline and a chain of responsibility. Essentially, I have a chain of handlers linked up by queues. Each handler will receive an object that represents the input data, forward it to the next handler so that it can start working on it, and then determine if it can do anything with that data.I don't believe I can call this a pipeline because one step doesn't really depend on the next. This also doesn't seem to be a traditional chain of responsibility because one handler can't prevent other handlers from handling that data. Is there a name for this design that will help me document this architecture? Or am I just going to have to call it something like "Pipeline of Responsibility"?
Would this be a pipeline, a chain of responsibility, or something else?
Adding the trap -p command to the bash script, stopping the hung Python process and running ps shows what's going on:

    $ cat foo.sh
    #!/bin/bash
    trap -p
    cat /dev/urandom | tr -dc 'a-z1-9' | fold -w 4 | head -n 1
    $ python -c "import subprocess; subprocess.call(['./foo.sh'])"
    trap -- '' SIGPIPE
    trap -- '' SIGXFSZ
    ko5o
    ^Z
    [1]+  Stopped    python -c "import subprocess; subprocess.call(['./foo.sh'])"
    $ ps -H -o comm
    COMMAND
    bash
    python
    foo.sh
    cat
    tr
    fold
    ps

Thus, subprocess.call() executes the command with the SIGPIPE signal masked. When head does its job and exits, the remaining processes do not receive the broken pipe signal and do not terminate. Having the explanation of the problem at hand, it was easy to find the bug in the Python bug tracker, which turned out to be issue #1652.
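A common workaround (my addition, not part of the original answer) is to restore the default SIGPIPE disposition in the child before it executes; on Python 3.2+ subprocess already does this by default via restore_signals=True, so this mainly matters on Python 2:

    import signal
    import subprocess

    # Re-enable default SIGPIPE handling in the child so that the commands in
    # foo.sh terminate when head closes the pipe.
    subprocess.call(
        ['./foo.sh'],
        preexec_fn=lambda: signal.signal(signal.SIGPIPE, signal.SIG_DFL),
    )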
I have a short bash script foo.sh:

    #!/bin/bash
    cat /dev/urandom | tr -dc 'a-z1-9' | fold -w 4 | head -n 1

When I run it directly from the shell, it runs fine, exiting when it is done:

    $ ./foo.sh
    m1un
    $

but when I run it from Python

    $ python -c "import subprocess; subprocess.call(['./foo.sh'])"
    ygs9

it outputs the line but then just hangs forever. What is causing this discrepancy?
Script works differently when ran from the terminal and ran from Python
As per the docs: "You must have a job in at least one stage other than .pre or .post." In the above configuration, no jobs other than the ones in .pre are added on merge request events, hence no jobs are added at all.
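One way to satisfy that requirement (a sketch, assuming you also want the downstream jobs to exist in merge request pipelines) is to add a matching rule to a job outside .pre, for example:

    run-tests:
      stage: test
      rules:
        - if: '$CI_COMMIT_BRANCH == "$CI_DEFAULT_BRANCH"'
        - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      # ... trigger/include/needs as before ...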
Given the following .gitlab-ci.yml:

    ---
    stages:
      - .pre
      - test
      - build

    compile-build-pipeline:
      stage: .pre
      script: [...]
      artifacts:
        paths: [".artifacts/build.yaml"]

    lint-source:
      stage: .pre
      script: [...]

    run-tests:
      stage: test
      rules:
        - if: '$CI_COMMIT_BRANCH == "$CI_DEFAULT_BRANCH"'
      trigger:
        strategy: depend
        include:
          artifact: .artifacts/tests.yaml
          job: "compile-test-pipeline"
      needs: ["compile-test-pipeline"]

    build-things:
      stage: test
      rules:
        - if: '$CI_COMMIT_BRANCH == "$CI_DEFAULT_BRANCH"'
      trigger:
        strategy: depend
        include:
          artifact: .artifacts/build.yaml
          job: "compile-build-pipeline"
      needs: ["compile-build-pipeline"]
    ...

The configuration should always run (any branch, any source). Test and build jobs should run only on the default branch. However, no jobs are run for merge requests, and manually triggering the pipeline on branches other than the default one gives the error "No Jobs/Stages for this Pipeline". I've tried explicitly setting an always-run rule using rules: [{if: '$CI_PIPELINE_SOURCE'}] on the jobs in the .pre stage, but no dice. What am I doing wrong?
"No Stages/Jobs jobs for this pipeline" for branch pipeline
Got it! The line needs to go in the settings module for the project. Now it works!
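In other words, the pipeline is enabled in the project's settings.py rather than on the spider class; a sketch (the priority value 300 is arbitrary, and very old Scrapy versions used a plain list instead of a dict):

    # settings.py
    ITEM_PIPELINES = {
        'event.pipelines.FilePipeline': 300,
    }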
I have a spider that I have written using the Scrapy framework. I am having some trouble getting any pipelines to work. I have the following code in my pipelines.py:

    class FilePipeline(object):

        def __init__(self):
            self.file = open('items.txt', 'wb')

        def process_item(self, item, spider):
            line = item['title'] + '\n'
            self.file.write(line)
            return item

and my CrawlSpider subclass has this line to activate the pipeline for this class:

    ITEM_PIPELINES = [ 'event.pipelines.FilePipeline' ]

However when I run it using scrapy crawl my_spider I get a line that says

    2010-11-03 20:24:06+0000 [scrapy] DEBUG: Enabled item pipelines:

with no pipelines (I presume this is where the logging should output them). I have tried looking through the documentation but there doesn't seem to be any full example of a whole project to see if I have missed anything. Any suggestions on what to try next? Or where to look for further documentation?
Can't get Scrapy pipeline to work
This solved it for me:

Browse to https://bitbucket.org/your-org/your-app/admin/addon/admin/pipelines/ssh-keys
Under Known hosts, enter your app domain, say myapp.com, and tap Fetch.
Wait for the fingerprint to be retrieved and then tap Add.
The host address and fingerprint will be added to the bottom of the section.
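An alternative that keeps everything in the pipeline itself (my addition, not part of the original answer) is to add the host's key with ssh-keyscan before running rsync, e.g. in the deploy step:

    - step:
        name: Deploy to test
        script:
          - mkdir -p ~/.ssh
          # accept the deploy host's key so ssh/rsync does not fail host verification
          - ssh-keyscan -H $DEPLOY_HOST >> ~/.ssh/known_hosts
          - rsync -zrSlh --stats --exclude-from=deployment-exclude-list.txt $BITBUCKET_CLONE_DIR/ $DEPLOY_USER@$DEPLOY_HOST:$DEPLOY_PATH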
I have created a Bitbucket pipeline under a repository, and I have generated the SSH keys and updated the authorized_keys file on the host. The delivery part is carried out by rsync; during the deployment phase I'm getting the following error:

    rsync -zrSlh --stats --exclude-from=deployment-exclude-list.txt $BITBUCKET_CLONE_DIR/ $DEPLOY_USER@$DEPLOY_HOST:$DEPLOY_PATH;
    Host key verification failed.
    rsync: connection unexpectedly closed (0 bytes received so far) [sender]
    rsync error: unexplained error (code 255) at io.c(235) [sender=3.1.2]

My bitbucket-pipelines.yml is as follows:

    image: php:7.2.1-fpm

    pipelines:
      default:
        - step:
            caches:
              - composer
            script:
              - apt-get update
              - apt-get install git -y
              - export APP_ENV=testing
              - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
              - composer install
        - step:
            name: Deploy to test
            deployment: test
            script:
              - apt-get update
              - apt-get install ssh -y
              - apt-get install rsync -y
              - rsync -zrSlh --stats --exclude-from=deployment-exclude-list.txt $BITBUCKET_CLONE_DIR/ $DEPLOY_USER@$DEPLOY_HOST:$DEPLOY_PATH;

According to the documentation this yml should work, but I'm getting the above error. Your help is really appreciated and welcome.
Bitbucket pipeline deploy with rsync - Host key verification failed
As far as I know, you could learn about Continuous Integration in Azure Data Factory. You can find the statement below in "Continuous integration and deployment in Azure Data Factory": "For Azure Data Factory, continuous integration & deployment means moving Data Factory pipelines from one environment (development, test, production) to another. To do continuous integration & deployment, you can use Data Factory UI integration with Azure Resource Manager templates. The Data Factory UI can generate a Resource Manager template when you select the ARM template options. When you select Export ARM template, the portal generates the Resource Manager template for the data factory and a configuration file that includes all your connection strings and other parameters. Then you have to create one configuration file for each environment (development, test, production). The main Resource Manager template file remains the same for all the environments." For more detailed steps and a video, just refer to the above link.
I'm trying to export a pipeline created in Data Factory v2, or migrate it to another data factory, but I cannot find the option. Could you help me please?
how to export pipeline in datafactory v2 or migrate to another
You can consider this solution:

    public class Pipeline
    {
        public Func<object, object> CreatePipeline(params Func<object, object>[] funcs)
        {
            if (funcs.Count() == 1)
                return funcs[0];

            Func<object, object> temp = funcs[0];

            foreach (var func in funcs.Skip(1))
            {
                var t = temp;
                var f = func;
                temp = x => f(t(x));
            }

            return temp;
        }
    }

Usage:

    Func<int, string> a = x => (x * 3).ToString();
    Func<string, bool> b = x => int.Parse(x.ToString()) / 10 > 0;
    Func<bool, bool> c = x => !x;

    var pipeline = new Pipeline();
    var func = pipeline.CreatePipeline(x => a((int)x), x => b((string)x), x => c((bool)x));

    Console.WriteLine(func(3)); // True
    Console.WriteLine(func(4)); // False
Does C#'s type system have the ability to specify a function that takes an enumerable number of functions which commute to form a pipeline? The effect would be similar to chaining, but instead of

    var pipeline = a.Chain(b).Chain(c)

one could write

    var pipeline = CreatePipeline(a, b, c)

where a, b and c are functions? I have included a bit of sample code to illustrate, thanks.

    void Main()
    {
        Func<int, string> a = i => i.ToString();
        Func<string, DateTime> b = s => new DateTime(2000,1,1).AddDays(s.Length);
        Func<DateTime, bool> c = d => d.DayOfWeek == DayOfWeek.Wednesday;

        //var myPipeline = CreatePipeline(a, b, c);
        Func<int, bool> similarTo = i => c(b(a(i)));
        Func<int, bool> isThisTheBestWeCanDo = a.Chain(b).Chain(c);
    }

    public static class Ext
    {
        //public static Func<X, Z> CreatePipeline<X,Z>(params MagicFunc<X..Y>[] fns) {
        //    return
        //}

        public static Func<X, Z> Chain<X,Y,Z>(this Func<X,Y> a, Func<Y,Z> b)
        {
            return x => b(a(x));
        }
    }
C# pipelined function array signature
You might have already found a solution, but I had a similar issue. I am working with pandas and would like the ColumnTransformer to return a dataframe again. I do this by placing the column names back in the order they are used in the ColumnTransformer, but I wanted to make sure it was correct, so I wanted to invert the transformation and check whether it returned the original dataframe and thus hadn't mislabeled any columns.

There are 2 ways to access the sub-transformers inside your tf:

    tf.transformers_[1][1]  # second transformer, 2nd item being the actual class
    tf.named_transformers_['scaler']

You can then call inverse_transform on that particular sub-transformer. This only gives you the ability to do the inverse with one of the transformers, so you'd then have to reconstruct your dataset by appending the results of both into one frame again.
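As a rough sketch of that last step, under the assumption that with the transformer order from the question the one-hot block comes first and the scaled numeric block comes last:

    import numpy as np
    from scipy import sparse

    scaler = tf.named_transformers_['scaler']

    # ColumnTransformer may return a sparse matrix when OneHotEncoder is involved
    X_dense = X_preprocessed.toarray() if sparse.issparse(X_preprocessed) else np.asarray(X_preprocessed)

    # the scaled numeric columns are the last len(numeric) columns in this layout
    numeric_original = scaler.inverse_transform(X_dense[:, -len(numeric):])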
I would like to understand how to apply an inverse transformation in a pipeline, rather than using the StandardScaler function directly. The code that I am using is the following:

    import pandas as pd
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.compose import ColumnTransformer
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    categoric = X.select_dtypes(['object']).columns
    numeric = X.select_dtypes(['int']).columns

    tf = ColumnTransformer([('onehot', OneHotEncoder(), categoric),
                            ('scaler', StandardScaler(), numeric)])

    X_preprocessed = tf.fit_transform(X)

    model = KMeans(n_clusters=2, random_state=24)
    model.fit(X_preprocessed)

After getting the output of a given model (KMeans in this case), how can I get back the original scale of the numeric values of any X dataframe? I know StandardScaler has a method (.inverse_transform) to do that, but my question arises in the use of a pipeline with ColumnTransformer.

P.S.: The objective of doing so is to interpret the centroids of the model.
How to implement inverse transformation in a pipeline of a ColumnTransformer?
Azure DevOps Pipeline: second parameter dependent on the first parameter

I am afraid there is no out-of-the-box way to resolve this question at the moment. That is because conditions do not support parameters at present, so we cannot add a condition to the second parameter to set its value based on the first parameter. As we know, runtime parameters are used to let you have more control over what values can be passed to a pipeline. So, as a workaround for this question, we could use the logging command with a condition to set the second value based on the value of the first parameter:

    parameters:
      - name: parametr1
        displayName: example1
        type: string
        default: first
        values:
          - first
          - second
          - third

    trigger: none

    jobs:
      - job: build
        displayName: build
        pool:
          name: MyPrivateAgent
        steps:
          - task: InlinePowershell@1
            displayName: 'SetVariableV1'
            inputs:
              Script: 'Write-Host "##vso[task.setvariable variable=parametr2.1;]123456"'
            condition: and(succeeded(), eq('${{ parameters.parametr1 }}', 'first'))

          - task: InlinePowershell@1
            displayName: 'SetVariableV2'
            inputs:
              Script: 'Write-Host "##vso[task.setvariable variable=parametr2.2;]true"'
            condition: and(succeeded(), eq('${{ parameters.parametr1 }}', 'second'))

I tested it on my side, and it works fine.
Is it possible in Azure DevOps that the choice of the first parameter by the user determines the second parameter (type, displayName etc.)? For example:

    parameters:
      - name: parametr1
        displayName: example1
        type: string
        default: first
        values:
          - first
          - second
          - third

And if the user selects "first" when starting the pipeline, the second parameter to enter is:

    - name: parametr2.1
      displayName: example2
      type: number

But if the user selects "second" when starting the pipeline, the second parameter to enter is:

    - name: parametr2.2
      displayName: example2.2
      type: boolean

Thanks for any help :)
Azure Devops Pipeline: Second parameter dependent on the first parameter
Using a pipe sends the output (stdout) of the first command to stdin (input) of the child process (2nd command). The commands you mentioned don't take any input on stdin. This would work, for example, with cat (and by work, I mean work like cat run with no arguments, which just passes along the input you give it):

    ls | cat

For your applications, this is where xargs comes in. It takes piped input and gives it as arguments to the command specified. So, you can make it work like:

    ls | xargs du -sb

Beware that by default xargs will break its input on spaces, so if your filenames contain spaces this won't work as you want. So, in this particular case, this would be better:

    du -sb *
I'm trying to use the du command for every directory in the current one. So I'm trying to use code like this:

    ls | du -sb

But it's not working as expected. It outputs only the size of the current '.' directory and that's all. The same thing happens with echo:

    ls | echo

It outputs an empty line. Why is this happening?
Why is du or echo pipelining not working?
Yes, we have come across this problem, and it is because the Alpine Linux edge signing keys were rotated (according to the official announcement). You have to execute this command inside the Dockerfile:

    apk add -X https://dl-cdn.alpinelinux.org/alpine/v3.16/main -u alpine-keys

Another way is to upgrade the base image, in your case microsoft/dotnet:2.2-aspnetcore-runtime-alpine, to a newer version.
I am new to this, but I inherited a project where the runtime build is created with a Dockerfile and commands like this:

    # Build runtime image
    FROM microsoft/dotnet:2.2-aspnetcore-runtime-alpine
    RUN echo "http://dl-4.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories;
    RUN apk update && apk add libgdiplus
    RUN apk add --no-cache icu-libs

The GitLab pipeline shows this:

    Step 15/20 : RUN apk update && apk add libgdiplus
    96 ---> Running in 95f8ebccb602
    97 fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
    98 fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
    99 fetch http://dl-4.alpinelinux.org/alpine/edge/testing/x86_64/APKINDEX.tar.gz
    100 ERROR: http://dl-4.alpinelinux.org/alpine/edge/testing: UNTRUSTED signature
    101 WARNING: Ignoring APKINDEX.24c95890.tar.gz: No such file or directory
    102 v3.10.9-43-g3feb769ea3 [http://dl-cdn.alpinelinux.org/alpine/v3.10/main]
    103 v3.10.6-10-ged79a86de3 [http://dl-cdn.alpinelinux.org/alpine/v3.10/community]
    104 1 errors; 10355 distinct packages available
    105 Service 'api' failed to build: The command '/bin/sh -c apk update && apk add libgdiplus' returned a non-zero code: 1

I know that the keys have been rotated and I have to upgrade Alpine somehow, but adding RUN upgrade, or RUN apk add -X https://dl-cdn.alpinelinux.org/alpine/v3.16/main -u alpine-keys, doesn't change anything. Can someone please tell me what I need to do?
ERROR: http://dl-4.alpinelinux.org/alpine/edge/testing: UNTRUSTED signature
Yes - you use a pipe, i.e.

    tail -f <some filename> | grep 'username'
This question already has answers here: How to 'grep' a continuous stream? (13 answers). Closed 9 years ago.

I am trying to tail a user in a production log. Is it possible to use tail -f grep "username"?
Is it possible to use tail and grep in combination? [duplicate]
What do you plan to use as keys for the hashtable? Other than that, it should be pretty easy to do even with a simple Foreach-Object:

    Import-Csv Table.csv | Foreach-Object -begin { $Out = @{} } -process { $Out.Add('Thing you want to use as key',$_) } -end { $Out }

I don't see the need for any "change pipeline type" magic, honestly...?
I'd like to write a cmdlet "Convert-ToHashTable" which performs the following task:

    $HashTable = Import-Csv Table.csv | Convert-ToHashTable

Import-Csv places an array on the pipeline; how can I change it to a hashtable in my Convert-ToHashTable cmdlet? In the Process part of the cmdlet I can access the elements, but I don't know how to change the type of the pipeline itself:

    Process {
        Write-Verbose "Process $($myinvocation.mycommand)"
        $CurrentInput = $_
        ...
    }

Is there a way to return a complete hashtable as the new pipeline, or create a new pipeline with type hashtable?
Changing powershell pipeline type to hashtable (or any other enumerable type)
<stdio.h> is buffered by default, and stdout is line-buffered. Replace your printf("%s", s); with printf("%s\n", s); (the ending newline would flush the stdout buffer) or add a call to fflush(NULL); just after it. Actually, your question is unrelated to ping; it's the pipe that is buffered. You might use the lower-level pipe, fork, dup2, read syscalls and manage the buffer on the pipe explicitly. Then calling poll could be useful. You could consider using an ICMP pinging library like liboping, or consider instead doing an HTTP request (either using the wget program, or preferably libcurl; perhaps a simple HTTP HEAD request could be enough). As general advice, avoid forking a process with popen or system (because the commands available might not be the same on the target computer). Read some good Linux programming book, like http://advancedlinuxprogramming.com/
How can I capture the output of the ping command via a pipeline immediately? Here is my code:

    int main()
    {
        FILE *cmd = popen("ping -c 3 google.com | grep icmp", "r"); // ping google
        char *s = malloc(sizeof(char) * 200);
        while (1) {
            fgets(s, sizeof(char) * 200, cmd);
            printf("%s", s); // show outcome
            if (strstr(s, "icmp_req=3") != 0)
                break;
        }
        pclose(cmd);
        return 0;
    }

When the program has finished, it shows all the output at the same time. But I would like to read the output immediately, while the program executes.
Capturing output of ping in c
Using the get_params() function, you can get access to the various parts of the pipeline and their respective internal parameters. Here's an example of accessing 'vect':

    text_clf = Pipeline([('vect', CountVectorizer()), ('tfidf', TfidfTransformer()), ('clf', MultinomialNB())])
    print text_clf.get_params()['vect']

yields (for me)

    CountVectorizer(analyzer=u'word', binary=False, decode_error=u'strict',
            dtype=<type 'numpy.int64'>, encoding=u'utf-8', input=u'content',
            lowercase=True, max_df=1.0, max_features=None, min_df=1,
            ngram_range=(1, 1), preprocessor=None, stop_words=None,
            strip_accents=None, token_pattern=u'(?u)\\b\\w\\w+\\b',
            tokenizer=None, vocabulary=None)

I haven't fitted the pipeline to any data in this example, so calling get_feature_names() at this point will return an error.
I am using a pipeline very similar to the one given in this example:

    >>> text_clf = Pipeline([('vect', CountVectorizer()),
    ...                      ('tfidf', TfidfTransformer()),
    ...                      ('clf', MultinomialNB()),
    ... ])

over which I use GridSearchCV to find the best estimators over a parameter grid. However, I would like to get the column names of my training set with the get_feature_names() method from CountVectorizer(). Is this possible without implementing CountVectorizer() outside the pipeline?
retrieve intermediate features from a pipeline in Scikit (Python)
Your webcam transmits raw video output to whoever is listening to it. Adding videoconvert converts the raw video to a format that can be manipulated by the videoscale element, and the end product of the manipulation can be understood by the autovideosink element. So

    gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! videoscale ! video/x-raw, width=2592, height=600 ! autovideosink

is telling GStreamer to get the raw video from the camera, convert it into something that we understand, alter the video, and display it. I really recommend that when you have doubts about an element, you call gst-inspect-1.0 <element name> to see its description and properties.
I am not sure why this pipeline is breaking. I have GStreamer installed on Linux following the website's exact instructions; any ideas?

    gst-launch-1.0 v4l2src device=/dev/video0 ! videoscale ! video/x-raw, width=2592, height=600 ! autovideosink -v
    Setting pipeline to PAUSED ...
    Pipeline is live and does not need PREROLL ...
    Setting pipeline to PLAYING ...
    ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
    Additional debug info:
    gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
    streaming stopped, reason not-negotiated (-4)
    Execution ended after 0:00:00.000093207
    Setting pipeline to PAUSED ...
    Setting pipeline to READY ...
    Setting pipeline to NULL ...
    Freeing pipeline ...

If I change it to:

    gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! videoscale ! video/x-raw, width=2592, height=600 ! autovideosink -v

it will work, but why will it not work the other way?
GStreamer pipeline broken [closed]
Use the unary form of ',', PowerShell's array-construction operator:

    "a:b;c:d;e:f".Split(";") | ForEach-Object { , $_.Split(":") }

That way, the array returned by $_.Split(":") is effectively output as-is, as an array, instead of having its elements output one by one, which happens by default in a PowerShell pipeline. ',' creates a transient wrapper array whose only element is the array you want to output. PowerShell then unwraps the wrapper array on output, passing the wrapped array through.
I've written some pwsh code:

    "a:b;c:d;e:f".Split(";") | ForEach-Object { $_.Split(":") }  # => @(a, b, c, d, e, f)

but I want this:

    // in javascript
    "a:b;c:d;e:f".split(";").map(str => str.split(":"))
    [ [ 'a', 'b' ], [ 'c', 'd' ], [ 'e', 'f' ] ]

i.e. a nested array:

    @(
      @(a, b),
      @(c, d),
      @(e, f)
    )

Why? And what should I do?
Why does PowerShell flatten arrays automatically?
It's by design, according to the documentation: https://www.asp.net/aspnet/overview/owin-and-katana/owin-middleware-in-the-iis-integrated-pipeline. In the section Stage Marker Rules, you can read the following:

"The OWIN pipeline and the IIS pipeline is ordered, therefore calls to app.UseStageMarker must be in order. You cannot set the event handler to an event that precedes the last event registered with app.UseStageMarker. For example, after calling:

    app.UseStageMarker(PipelineStage.Authorize);

calls to app.UseStageMarker passing Authenticate or PostAuthenticate will not be honored, and no exception will be thrown. OWIN middleware components (OMCs) run at the latest stage, which by default is PreHandlerExecute. The stage markers are used to make them run earlier. If you specify stage markers out of order, we round to the earlier marker. In other words, adding a stage marker says "Run no later than stage X". OMCs run at the earliest stage marker added after them in the OWIN pipeline."
Given this in my app startup ...

    app.Use((context, next) =>
    {
        return next.Invoke();
    }).UseStageMarker(PipelineStage.PostAuthenticate);

    app.Use((context, next) =>
    {
        return next.Invoke();
    }).UseStageMarker(PipelineStage.Authenticate);

... why does the PostAuthenticate code execute before the Authenticate code? I don't mean "why does the first app.Use get called before the second app.Use". I mean: why does the first Invoke get called before the second, given that the second should be happening earlier in the request pipeline?

EDIT: Related to this problem: How am I getting a windows identity in this code?
Owin Stage Markers
To process multiple items received via the pipeline you need a process block in your function:

function Test-Function {
    param (
        [Parameter(ValueFromPipeline=$true)]
        $Test
    )
    process {
        $Test.GetType().FullName
        $Test
    }
}

[string[]] $Array = "Value 1", "Value 2"
$Array | Test-Function

More info:

get-help about_functions
http://technet.microsoft.com/en-us/library/dd347712.aspx
get-help about_Functions_Advanced
http://technet.microsoft.com/en-us/library/dd315326.aspx
I'm finding that passing objects to functions through the PowerShell pipeline converts them tostringobjects. If I pass the object as a parameter it keeps its type. To demonstrate:I have the following PowerShell function which displays a object's type and value:function TestFunction { param ( [Parameter( Position=0, Mandatory=$true, ValueFromPipeline=$true )] $InputObject ) Echo $InputObject.GetType().Name Echo $InputObject }I ran this script to test the function:[string[]] $Array = "Value 1", "Value 2" # Result outside of function. Echo $Array.GetType().Name Echo $Array Echo "" # Calling function with parameter. TestFunction $Array Echo "" # Calling function with pipeline. $Array | TestFunctionThis produces the output:String[] Value 1 Value 2 String[] Value 1 Value 2 String Value 2EDIT: How can I use the pipeline to pass an entire array to a function?
Getting different results using the pipeline with functions
I figured out what the problem was. OpenMP is the package used to implement multithreading for spaCy's pipe() method. That option is disabled for the MSVC compiler by default. After I compiled the source code with OpenMP support it works great. I also made a pull request to enable this in the next releases. So for releases after 0.100.7 (which is the latest version), multithreading with pipe() should work on Windows with no issue.
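For reference, here is a minimal sketch (not from the original answer) of feeding pipe() a stream of smaller texts instead of one giant string, so there is actually a batch to spread across threads. The file name, chunk size and thread count are placeholders, and the call matches the old spacy 0.x/1.x API used in the question (newer releases replace n_threads):

import io
from spacy.en import English

def iter_chunks(path, lines_per_chunk=1000):
    # yield the file as a sequence of moderately sized unicode texts
    chunk = []
    with io.open(path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            chunk.append(line)
            if len(chunk) == lines_per_chunk:
                yield u" ".join(chunk)
                chunk = []
    if chunk:
        yield u" ".join(chunk)

nlp = English()
for doc in nlp.pipe(iter_chunks("big_file.txt"), batch_size=50, n_threads=4):
    pass  # each doc arrives here fully tagged and parsed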
I'm trying to apply Spacy NLP (Natural Language Processing) pipline to a big text file like Wikipedia Dump. Here is my code based on Spacy'sdocumentationexample:from spacy.en import English input = open("big_file.txt") big_text= input.read() input.close() nlp= English() out = nlp.pipe([unicode(big_text, errors='ignore')], n_threads=-1) doc = out.next()Spacy applies all nlp operations like POS tagging, Lemmatizing and etc all at once. It is like a pipeline for NLP that takes care of everything you need in one step. Applying pipe method tho is supposed to make the process a lot faster by multithreading the expensive parts of the pipeline. But I don't see big improvement in speed and my CPU usage is around 25% (only one of 4 cores working). I also tried to read the file in multiple chuncks and increase the batch of input texts:out = nlp.pipe([part1, part2, ..., part4], n_threads=-1)but still the same performance. Is there anyway to speed up the process? I suspect that OpenMP feature should be enabled compiling Spacy to utilize multi-threading feature. But there is no instructions on how to do it on Windows.
Multi-Threaded NLP with Spacy pipe
The lw instruction requires that the memory address be word aligned. Therefore, to access an unaligned word, one would need to access the two aligned words that the required word spans and combine the necessary bytes.

For example, suppose you desire to load a word stored at address 0x2. 0x2 is not word aligned, so you would need to load the halfword stored at 0x2 and the halfword stored at 0x4. To do so (assuming little-endian byte order), one might write:

lhu $t0, 2($zero)    # lower halfword, zero-extended
lhu $t1, 4($zero)    # upper halfword
sll $t1, $t1, 16
or  $t2, $t0, $t1

This only gets more complicated if you want to load, for example, a word stored at address 0x3:

lbu $t0, 3($zero)    # least-significant byte of the target word
lw  $t1, 4($zero)    # aligned word holding the remaining three bytes
sll $t1, $t1, 8      # discard the byte we do not want and make room for $t0
or  $t2, $t0, $t1    # combine

So, it can be seen that the requirement for word alignment doesn't so much help MIPS to be pipelined as it rules out the case that would hurt it: access to an unaligned word would require extra memory accesses, and forbidding it guarantees the requested data can be transferred in a single memory access. This is a restriction of the ISA.
How does 'alignment of memory operands' help MIPS to be pipelined?The book says:Fourth, as discussed in Chapter 2, operands must be aligned in memory. Hence, we need not worry about a single data transfer instruction requiring two data memory accesses; the requested data can be transferred between processor and memory in a single pipeline stage.I think I understand that one data transfer instruction does not require two or more data memory aaccesses. However, I am not sure what does it have to do with the alignment of memory operands.Thanks, in advance!
How does 'alignment of memory operands' help MIPS to be pipelined?
I found the root cause of this behaviour. Jackson 2 API plugin version 2.11.1 is breaking kube deployments; you can find more info at the link below: https://issues.jenkins-ci.org/browse/JENKINS-62995

Downgrading the following plugins worked for me: Jackson 2 API v2.10.0, Kubernetes v1.21.3, Kubernetes Client API v4.6.3-1, Kubernetes Continuous Deploy v2.1.2, Kubernetes Credentials v0.5.0.

As those plugins are default ones, you would need to find the relevant version source files in https://plugins.jenkins.io/, and upload them to your Jenkins server by going to the Manage Jenkins --> Manage Plugins --> Advanced --> Upload Plugin section.
I get the following error when trying to use the YAML file from my GitRepo to deploy to kube cluster.Here is the content of my .yaml file:apiVersion: v1 kind: Service metadata: name: ts-service spec: type: NodePort selector: app: ts ports: - protocol: TCP port: 8080 nodePort: 8080 --- apiVersion: apps/v1 kind: Deployment metadata: name: ts-deployment labels: app: ts spec: replicas: 2 selector: matchLabels: app: ts template: metadata: labels: app: ts spec: containers: - name: ts image: $DOCKER_IMAGE_NAME:$BUILD_NUMBER ports: - containerPort: 8080I've already tried changing the ports, the API version to apps/v1, etc. What seems weird to me is that no matter which line goes foirst in the file, it always shows me the same issue. What can be causing this?
Class not found: io.kubernetes.client.openapi.models.V1Service
There are multiple ways to specify which nodes or parts of your pipeline to run.

Use kedro run parameters like --to-nodes / --from-nodes / --node to explicitly define what needs to be run.

In kedro>=0.15.2 you can define multiple pipelines, and then run only one of them with kedro run --pipeline <name>. If no --pipeline parameter is specified, the default pipeline is run. The default pipeline might combine several other pipelines. More information about using modular pipelines: https://kedro.readthedocs.io/en/latest/04_user_guide/06_pipelines.html#modular-pipelines

Use tags. Tag a small portion of your pipeline with something like "small", and then do kedro run --tag small. Read more here: https://kedro.readthedocs.io/en/latest/04_user_guide/05_nodes.html#tagging-nodes
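As a rough illustration of the tagging approach (the function, dataset and tag names below are made-up placeholders, not from your project):

from kedro.pipeline import Pipeline, node

def preprocess(raw_data):      # placeholder step
    return raw_data

def train_model(features):     # placeholder step
    return "model"

pipeline = Pipeline([
    node(preprocess, inputs="raw_data", outputs="features", tags=["small"]),
    node(train_model, inputs="features", outputs="model"),
])

Running kedro run --tag small then executes only the tagged node instead of the whole pipeline.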
I have a big pipeline, taking a few hours to run. A small part of it needs to run quite often, how do I run it without triggering the entire pipeline?
How to run parts of your Kedro pipeline conditionally?
export is a shell builtin and xargs expects an actual binary.
Why doesexportfail when used as the last step in a command pipeline?echo FOO=bar | xargs export # => xargs: export: No such file or directoryI can rewrite it this way to accomplish what I want:export `echo FOO=bar`But why can't I useexportin the first way?
Pipe to export command
As per the documentation on stages: "If a job does not specify a stage, the job is assigned the test stage."

For your question, "Is there any way to change this default stage name with Cloud-hosted GitLab?": you can define a single stage by using the following:

stages:
  - some-other-name

and then referring to the new stage name (some-other-name) in each of your jobs, because (from the same reference above) "if a stage is defined but no jobs use it, the stage is not visible in the pipeline".
I've moved my pipeline to a "stageless" one, simply by usingneedsrules and removing allstagedeclarations.This all works fine, but I've noticed that all my jobs now appear under a single stage called "Test".This is not a functional problem, but it does make developers question why it's the case. Is there any way to change this default stage name with Cloud-hosted GitLab?Is it as simple as setting all of the jobs to usestagewith the same value? Seems like a bit of a hack, and contrary to the instructions to "remove allstagekeywords from .gitlab-ci.yml".
GitLab-CI: Why do stageless pipelines show all jobs under stage "test"? Can this be changed?
If you just need to create a single issue, go with the cURL command. However, if you require more complex logic, the python-gitlab library can be a useful tool in your arsenal.

To create a project issue:

import gitlab
import os

gl = gitlab.Gitlab(os.environ['CI_SERVER_URL'], private_token=os.environ['PRIVATE_TOKEN'])
project = gl.projects.get(os.environ['CI_PROJECT_ID'])

issue_details = {
    'title': f'Validation failed in {os.environ["CI_PROJECT_NAME"]}',
    'description': f'Pipeline: {os.environ["CI_PIPELINE_URL"]}',
    'assignee_ids': [111, 222]
}

issue = project.issues.create(issue_details)

This assumes you have created a masked variable called PRIVATE_TOKEN. The other variables are pre-defined variables.

You can add the above code and other logic to a Python script and call it in gitlab-ci.yml like this:

# Use whatever image you need, but make sure it has Python installed
image: python:3.7
...
handle-failure:
  when: on_failure
  before_script:
    - pip install -r ./cicd/gitlab/requirements.txt
  script:
    - python ./cicd/gitlab/create_issue.py
I am planning to run some validations against the pull request in a CI pipeline and, based on the validation results, I wish to automatically create an issue and assign it to developers.Is this achievable in a GitLab pipeline?Thanks!
In GitLab, is it possible to automatically create an issue from a pipeline?
The reason you are seeing strange behavior is that piping in PowerShell does not work the same way as in the old cmd interpreter/console. Basically, the data is parsed by PowerShell before being passed on to the next command. You may read more in this GitHub issue about some of the ongoing work to enable two native processes to pass raw byte streams in PowerShell.

If you are unable to just run your command in a legacy cmd shell (that would probably be the easiest way), you may use the module Use-RawPipeline:

Install-Module -Name Use-RawPipeline

Then you can construct a command line like this (I've simplified the argument list to just use a file and skip the decklink source for brevity):

Invoke-NativeCommand -FilePath .\ffmpeg -ArgumentList @("-i", "C:\temp\testfile.mp4","-f","h264", "pipe:") | Invoke-NativeCommand -FilePath .\ffplay.exe -ArgumentList @("pipe:") | Receive-RawPipeline

If you need to add more commands to the pipeline, just insert another Invoke-NativeCommand -FilePath <command> before the Receive-RawPipeline.

Note: I am not sure about the performance of this solution, but I have confirmed it will pipe the output of ffmpeg to ffplay and display the encoded stream.
I have switched my current video project from command prompt to PowerShell so that I can take full advantage of theTee-Objectfor a multi output code.Currently, I have a version of my code working in batch, but I need to add one more feature through a tee. This is my first time using PowerShell, so this is probably a simple fix...Currently I have figured out how to runffmpegandffplayin PowerShell, and I have a program in batch which takes anffmpegoutput and pipes it toffplay, and this works just fine. I can also play throughffplayin PS just fine. In PS, this code works:ffplay -video_size 1280x720 -pixel_format uyvy422 -framerate 60 -i video="Decklink Video Capture"And as a batch, this code works fine for what I'm doing:ffmpeg -video_size 1280x720 -pixel_format uyvy422 -framerate 60 -i video="Decklink Video Capture" -c:v libx265 -preset ultrafast -mpegts pipe: | ffplay pipe:It takes the video I want and plays it through the pipe to the screen. When played through PowerShell, though, the video doesn't even pop up. I don't get any warnings or anything, and it seems to run fine, but I don't get the picture to display.End goal is to be able to play the video on the host display in full resolution, and publish a lower bitrate to the network, if that helps.
Using ffmpeg and ffplay piped together in PowerShell
The History callback records training metrics for each epoch. This includes the loss and the accuracy (for classification problems) as well as the loss and accuracy for the validation dataset, if one is set.

The history object is returned from calls to the fit() function used to train the model. Metrics are stored in a dictionary in the history member of the object returned.

This also means that the values have to be in the scope of the fit() call on the Sequential model itself, so if the model sits inside a sklearn pipeline, the pipeline doesn't have access to the final values, and it can't store or return what it can't see.

As of right now I am not aware of a history callback in sklearn, so the only option I see for you is to manually record the metrics you want to track. One way to do so would be to have the pipeline return the transformed data and then simply fit your model onto it. If you are not able to figure that out, leave a comment.
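Here is a minimal, runnable sketch of that manual approach (toy data and a toy Keras model, not your actual pipeline): the sklearn pipeline only does the preprocessing, and the Keras model is fitted directly so fit() hands you the History object.

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from keras.models import Sequential
from keras.layers import Dense

X, y = np.random.rand(100, 4), np.random.randint(0, 2, 100)

# fit only the preprocessing steps through sklearn ...
preprocessing = Pipeline([("scale", StandardScaler())])
X_prepared = preprocessing.fit_transform(X)

# ... then fit the Keras model yourself and keep the History object
model = Sequential([Dense(8, activation="relu", input_dim=4),
                    Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
history = model.fit(X_prepared, y, epochs=5, verbose=0)

print(history.history["loss"])   # per-epoch metrics live in history.history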
Kerasmodels, when.fitis called, return a history object. Is it possible to retrieve it if I use this model as one step of a sklearn pipeline? btw, i'm using python 3.6Thanks in advance!
sklearn pipeline + keras sequential model - how to get history?
This might depend on your OS/distribution and GStreamer version. Over here (Debian jessie, GStreamer 0.10.36) gst-inspect ffdec_h264 gives the following output:

Factory Details:
  Long name:    FFmpeg H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 decoder
  Class:        Codec/Decoder/Video
  Description:  FFmpeg h264 decoder
  Author(s):    Wim Taymans <[email protected]>, Ronald Bultje <[email protected]>, Edwar$
  Rank:         primary (256)

Plugin Details:
  Name:            ffmpeg
  Description:     All FFmpeg codecs and formats (system install)
  Filename:        /usr/lib/x86_64-linux-gnu/gstreamer-0.10/libgstffmpeg.so
  Version:         0.10.13
  License:         GPL
  Source module:   gst-ffmpeg
  Binary package:  FFmpeg
  Origin URL:      http://ffmpeg.org/

So on my system, ffdec_h264 is in the gst-ffmpeg module (which was installed using apt-get install gstreamer0.10-ffmpeg).
I am running this script to view cameras on network:gst-launch udpsrc port=1234 ! "application/x-rtp, payload=127" ! rtph264depay ! ffdec_h264 ! xvimagesink sync=falseI am getting this error:WARNING: erroneous pipeline: no element "ffdec_h264"I am getting error withffdec_h264. I have all the packages from g-streamer but I don't know which one I am missing. when I rungst-inspect | grep 264I get this output:h264parse: legacyh264parse: H264Parse x264: x264enc: x264enc videoparsersbad: h264parse: H.264 parser typefindfunctions: video/x-h264: h264, x264, 264 rtp: rtph264pay: RTP H264 payloader rtp: rtph264depay: RTP H264 depayloaderWhich shows I don't have thisffdec_h264which package I am missing?
Gstreamer ffdec_h264 missing
Look at "memory" parameter. It caches transformers from a pipeline.https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.make_pipeline.htmlpipeline = make_pipeline( TfidfVectorizer(), LinearRegression(), memory='cache_directory' )
Consider having following sklearnPipeline:pipeline = make_pipeline( TfidfVectorizer(), LinearRegression() )I haveTfidfVectorizerpretrained, so when I am callingpipeline.fit(X, y)I want onlyLinearRegressionto be fitted and I don't want to refitTfidfVectorizer.I am able to just apply transformation in advance and fitLinearRegressionon transformed data, but in my project I have a lot of transformers in a pipeline, where some of them are pretrained and some aren't, so I am searching for a way of not writing another wrapper around sklearn estimators and stay in a bounds of onePipelineobject.To my mind, it should be a parameter in the estimators object that stands for not refitting object when calling.fit()if object is already fitted.
Is it possible to fit separate parts of an sklearn pipeline?
Finally I resolved my problem by moving from Build to Build + Release. The solution of @Fairoz did not work for me because it still allows multiple builds of different branches to run at the same time. What I need is to lock a pipeline until there are no builds running. The way to do this on Azure DevOps is with Capabilities + Demands, but that doesn't work with Microsoft-hosted agents, so what I did was move all my test logic to a Release that deploys to CI, and leave the build so that it just produces an artifact.

So the workflow is: PR to GitHub > Trigger Build > Generate an artifact with my branch > Trigger release > Release code to CI and run the tests

Azure DevOps lets you control how many parallel executions of one stage can exist, so I limited the number of parallel tasks to 1 and that's it.
I've two different pipelines with 3 enabled agents. My problem is that one of my pipelinesfails if has multiple builds running at same time(because tets are running into conflict), so I want to queue an build request if there is another one running for this specific pipeline.The first thing i've tried it's use capabilities and demands to identify one agent, so always has only one agent available to this pipeline, but it doesn't work withMicrosoft-hosted agentsand that it's what i have. Then I thought that maybe creating 2 agent pools I can specify on my pipeline config to use one of them, but one more time I can't create multiple pools forMicrosoft-hosted agentsHow can I prevent multiple builds running at same time?Thanks!
Azure Devops - How to prevent multiple pipelines run at same time?
Refer to the following document for configuring Jenkins with fastlane: https://docs.fastlane.tools/actions/upload_to_app_store/#jenkins-integration
i'm working on iOS project which have continous intergration set up, i wanted to create a jenkins pipeline for my project to run automation steps to do build,test and etc operations. For the automation process i'm using Fastlane tool, so how can i sync up the jenkins pipeline with my Fastlane commands in it? I got few examples related to maven commands in pipeline file, as maven plugin option is already available in jenkins, similary how can i achieve the same for fastlane. I need few examples to write my declarative pipeline syntax in my xcode project jenkins file.Also i would like to know should the jenkinspipe line file should be inside the xcode project or it should be under the master branch ?Any help is appreciated. Thanks.
Can configure Jenkins pipeline with fastlane commands in xcode project file
In your app/assets/javascripts/application.js file, try putting:

//= require_tree ../Typefaces

See more: http://guides.rubyonrails.org/asset_pipeline.html

Let me know if that works.
I have a folder in my asset pipeline called typefaces. It works without any additions toapplication.rb.In the directory I have different typeface types, like .eof, .ttf, etc in folders, like thisAssets Typefaces Eof ...files Ttf ...filesUnless the typefaces are in Assets/typefaces they don't become part of asset pipeline. Asset pipeline doesn't go into the subdirectories.How would I have asset pipeline look beyond assets/typefaces into assets/typefaces/eof, assets/typefaces/ttf etc?
Require tree in asset pipeline
My own attempt to solve this resulted in parsing git status. It seems cleaner and easier to implement. On the other hand, I am looking forward to creating a specially crafted XML file to get the needed information in a more "clean" way.
I'd like to integrate git into production pipeline to stage 3dsmax files. While it is alright to work with git through TortoiseGit, I'd like to communicate with it from the Maxscript to add custom menu commands to 3dsmax.Should I parsegit statusoutput text to determine folder status or should I use some wrapping tool to correctly communicate with git?I was thinking aboutgitsharpsince it is easy to call dotNet objects from Maxscript, but I didn't use external dotNet programs.
Should I parse git status or use gitsharp?
Try like this:

$objcol = @()
ForEach ($objItem in $colItems) {
    $obj = New-Object System.Object
    $obj | Add-Member -MemberType NoteProperty -Name ComputerName -Value $ComputerName
    $obj | Add-Member -MemberType NoteProperty -Name MacAddress -Value $objItem.MacAddress
    $obj | Add-Member -MemberType NoteProperty -Name IPAdress -Value $objitem.IpAddress
    $objcol += $obj
}
Write-Output $objcol
Why does the script below come up with the following error?"Add-Member : Cannot process command because of one or more missingmandatory parameters: InputObject.+ $obj = Add-Member <<<< -MemberType NoteProperty -Name ComputerName -Value $ComputerName+ CategoryInfo : InvalidArgument: (:) [Add-Member], ParameterBindingException+ FullyQualifiedErrorId : MissingMandatoryParameter,Microsoft.PowerShell.Commands.AddMemberCommand"Script# Receives the computer name and stores the required results in $obj. Function WorkerNetworkAdaptMacAddress { Param($ComputerName) $colItems = GWMI -cl "Win32_NetworkAdapterConfiguration" -name "root\CimV2" -comp $ComputerName -filter "IpEnabled = TRUE" $obj = New-Object -TypeName PSobject ForEach ($objItem in $colItems) { $obj = Add-Member -MemberType NoteProperty -Name ComputerName -Value $ComputerName $obj = Add-Member -MemberType NoteProperty -Name MacAddress -Value $objItem.MacAddress $obj = Add-Member -MemberType NoteProperty -Name IPAdress -Value $objitem.IpAddress } Write-Output $obj } # Receives the computer name and passes it to WorkerNetworkAdaptMacAddress. Function Get-NetworkAdaptMacAddress { begin {} process{ WorkerNetworkAdaptMacAddress -computername $_ } end {} } # Passes a computer name to get-networkAdaptMacAddress 'tbh00363' | Get-NetworkAdaptMacAddress
Using PowerShell's Add-Member results in an error
find already has a delete function, so no pipes are necessary:

find . -iname thumbs.db -delete

This says: delete all files matching thumbs.db, regardless of capitalization, recursively from my current working directory.
I have a laptop installed with Ubuntu 10.04. I migrated some of my files from one computer to this computer. But there are some files like Thumbs.db file whose every occurrence I want to get rid of.I tried usinglocate Thumbs.db | rmBut dis didn't worked out (and clearly it should not). Then I tried using following, but quite expectedly none of them worked out :locate thumbs.db > rm locate thumbs.db < rmAs everyone here, might have pointed out that I am having a hard time using pipeline and want to just clear my concept using this as an example. I have read the basics but still not able to intitutively able to apply it.
Using pipes to delete all occurences of Thumbs.db file from Ubuntu Laptop
You can simply define it like this:

let (|>>) e a = e |> addAttr a
I'm working on a project and I want to create a really compact method for creating Entities and Attributes.I want to do this with the pipeline operator. But I want to add extra functionality to this operator.Like for example :let entity = (entity "name") |>> (attribute "attr" String) |>> (attribute "two" String)In this example |>> would be a pipeline operator together with the functionality to add an attribute to the entity.I know that this works:let entity = (entity "name") |> addAttr (attribute "attr" String)So what I want to know is, if it's possible to replace|> addAttrwith|>>Thanks for the help(I don't know if this is even possible)
Combine functionality and Pipeline operator in F#
There is an easier way: simply call the proceedEmpty URL for the jobs:

curl -X POST -H "Jenkins-Crumb:${JENKINS_CRUMB}" http://${JENKINS_URL}/job/${JOB_NAME}/${BUILD_ID}/input/${INPUT_ID}/proceedEmpty

There is no need to pass in body data.

To abort, use:

curl -X POST -H "Jenkins-Crumb:${JENKINS_CRUMB}" http://${JENKINS_URL}/job/${JOB_NAME}/${BUILD_ID}/input/${INPUT_ID}/abort
I have Jenkins pipeline with an Input step, and I would like to submit this input(single string argument) via a script. So far I am trying with curl, ideally I'll be sending it via Python requests library. This should be an easy POST request, however with CSRF it becomes tricky. I've obtained Jenkins-Crumb (using curl in this case, from the same machine and same bash session), but still can't send the content...I'm sendingJenkins-Crumb:XXXheader, just like it is explained athttps://wiki.jenkins-ci.org/display/JENKINS/Remote+access+APImy request looks like this:curl -vvv -X POST --user '${USER}:${API_KEY}' -H "Jenkins-Crumb:${JENKINS_CRUMB}" -d 'json="{"parameter":{"name":"${PARAM_NAME}","value":"asd"},"Jenkins-Crumb":"${JENKINS_CRUMB}"}"' 'http://${JENKINS_URL}/job/${JOB_NAME}/${BUILD_NR}/input/'The URL I'm POSTing at is the same, as the one linked in build log (Console output).
Jenkins input pipeline step filled via POST with CSRF - howto?
You can pipe the output and use foreach to check each line; if the line equals a certain string you can use break to stop the command:

Get-Content path\to\text\file\to\be\read.txt -wait | % {$_ ; if($_ -eq "yourkeyword") {break}}

For example, if you run the command above and do the following in another shell:

"yourkeyword" | Out-File path\to\text\file\to\be\read.txt -Append

the Get-Content displays the new line and stops.

Explanation:

| % - foreach line that is in the file or gets added to the file while the command runs
$_; - first write out the line of the foreach, then do another command
if($_ -eq "yourkeyword") {break} - if the line of the foreach is equal to the keyword/line you want, stop the Get-Content
I am using this below script in Powershell (Version is 5.1):Get-Content -Path path\to\text\file\to\be\read.txt -WaitNow this continues to read even after the file is getting no update. How can I stop after I find particular line in the text file? Is there any other way to stop this on condition?
How to stop Get-Content -wait after I find a particular line in the text file using powershell? Exit a pipeline on demand
Found 2 working solutions. The more elegant one is to add the following as part of the pipeline script:

- export DOCKER_BUILDKIT=0

Source: https://community.atlassian.com/t5/Bitbucket-discussions/Bitbucket-pipelines-authorization-denied-by-plugin-pipelines/td-p/2147800#M3907
I am currently trying to build a bitbucket pipeline which is supposed to run a docker-compose file to test a microservice before deployment. The docker compose file is supposed to build my microservice image and run it.This all seems to work fine locally, however, when I move things to the pipeline I constantly keep getting this error:#1 [internal] booting buildkit #1 pulling image moby/buildkit:buildx-stable-1 #1 pulling image moby/buildkit:buildx-stable-1 2.4s done #1 creating container buildx_buildkit_default 0.0s done #1 ERROR: Error response from daemon: authorization denied by plugin pipelines: --privileged=true is not allowed ------ > [internal] booting buildkit: ------ Error response from daemon: authorization denied by plugin pipelines: --privileged=true is not allowedDockerfileFROM node:12-alpine WORKDIR /app/playground RUN npm install[email protected]RUN rm -rf /usr/local/lib/node_modules/npm RUN mv node_modules/npm /usr/local/lib/node_modules/npm COPY package.json package-lock.json ./ RUN npm ci COPY . . CMD [ "npm", "run", "start" ]docker-compose.ymlversion: "3" services: playground: build: . ports: - 9111:9111 env_file: - ./envs/test.envI understand that bitbucket pipelines have some mechanism to prevent certain operations from executing for security reasons, but as far as I am aware I am not doing that here.Any idea of how I could possibly fix this error?
Bitbucket pipelines authorization denied by plugin pipelines
My habit is to comment out (#) the pipe leading to the next command, then run the code (cmd+enter on a Macbook or ctrl+enter on Windows). After checking the results, simply remove the comment character (#) and go on.

library(tidyverse)
result = mtcars |>
  filter(cyl == 4) #|>   <- run the code up to here, the rest would be ignored
  group_by(gear) |>
  summarise()

This would still require a few clicks to remove the comment character; I would love to see others' approaches.
When coding, I often want to check the intermediate results of the pipeline I'm working on. If I'm working on the early parts of a long pipeline, it requires quite a few clicks/mouse to run that selectively and to save the outcome. Is there a neat way to do something like the following?library(dplyr) result = mtcars |> # Testing this step filter(cyl == 4) |> return_early() |> # I don't want to run the rest of the pipeline group_by(gear) |> summarise()so that after executing,resultwill hold the result atreturn_early()without executing the rest of the pipeline?Note that I'm asking about a convenient way tosavethe intermediate output and stop the evaluation. If you're interested inprinting, seehereandhere.
Return dplyr pipeline result early
Just get the model from the stages: lrModel.stages[-1].summary. If the model is earlier in the Pipeline, replace -1 with its index.
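For example (assuming the binary label and the fitted PipelineModel lrModel from your code):

lr_model = lrModel.stages[-1]            # the LogisticRegressionModel is the last stage
trainingSummary = lr_model.summary

print(trainingSummary.objectiveHistory)  # loss per iteration
print(trainingSummary.areaUnderROC)      # only defined for a binary target
trainingSummary.roc.show()               # ROC curve as a DataFrame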
I've estimated a logistic regression using pipelines.My last few lines before fitting the logistic regression:from pyspark.ml.feature import VectorAssembler from pyspark.ml.classification import LogisticRegression lr = LogisticRegression(featuresCol="lr_features", labelCol = "targetvar") # create assember to include encoded features lr_assembler = VectorAssembler(inputCols= numericColumns + [categoricalCol + "ClassVec" for categoricalCol in categoricalColumns], outputCol = "lr_features") from pyspark.ml.classification import LogisticRegression from pyspark.ml import Pipeline # Model definition: lr = LogisticRegression(featuresCol = "lr_features", labelCol = "targetvar") # Pipeline definition: lr_pipeline = Pipeline(stages = indexStages + encodeStages +[lr_assembler, lr]) # Fit the logistic regression model: lrModel = lr_pipeline.fit(train_train)And then I tried to run the summary of the model. However, the code line below:trainingSummary = lrModel.summaryresults in: 'PipelineModel' object has no attribute 'summary'Any advice on how one could extract the summary information that is usually contained in regression's model from a pipeline model?Thanks a lot!
Spark: Extracting summary for a ML logistic regression model from a pipeline model
I asked the same question on the Zookeeper mailing list, and got this:

"Most probably you are using the wrong "nc" command. Not kidding :P there are two different "nc" packages, and the syntax is different between them. In debian-like distros they are netcat-openbsd and netcat-traditional, but I ran into the same problems with netcat in CentOS (I can't remember the name of the packages, sorry) until I realized I was using it wrong." -- Tomas Nunez

I found that the nc on my server is nc.openbsd; after installing nc.traditional, echo "stat" | nc.traditional 10.18.10.30 2181 returns the expected result.
I try to get zookeeper stat from shell by usingnc,callnc localhost 2181first, then type in:statworks.whileecho "stat" | nc localhost 2181returns nothing.why?
why pipeline content to command nc won't work?
This is (unfortunately!) not supported in the F# language - while you can come up with various fancy functions and operators to emulate the behavior, I think it is usually just easier to refactor your code so that the call is outside of the pipeline. Then you can write:

let input = 2
let result = func 5 input 6

The strength of a pipeline is when you have one "main" data structure that is processed through a sequence of steps (like a list processed through a sequence of List.xyz functions). In that case, the pipeline makes the code nicer and more readable.

However, if you have a function that takes multiple inputs and no "main" input (a last argument that would work with pipelines), then it is actually more readable to use a temporary variable and ordinary function calls.
I have googlet a bit, and I haven't found what I was looking for. As expected. My question is, is it possible to define a F# pipeline placeholder? What I want is something like _ in the following:let func a b c = 2*a + 3*b + c 2 |> func 5 _ 6Which would evaluate to 22 (2*5 + 3*2 + 6).For comparison, check out themagrittrR package:https://github.com/smbache/magrittr
F# pipeline placeholder?
You could try:

parallel:
  matrix:
    - PHP_VERSION: [ '7.4', '8.0', '8.1' ]
rules:
  - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    when: on_success
  - if: '$PHP_VERSION == "8.1"'
    when: on_success
    allow_failure: true
  - if: '$PHP_VERSION'
    when: on_success
    allow_failure: false

Based on https://docs.gitlab.com/ee/ci/jobs/job_control.html#run-a-matrix-of-parallel-trigger-jobs

Example of execution
I need configure my GitLab CI job like this:Only job with$CI_PIPELINE_SOURCE == "merge_request_event"is added to Pipeline,Job is runned multipletimes bymatrixfor each version defined by matrixPHP_VERSION: [ '7.4', '8.0', '8.1' ].The'8.1'must but runned withallow_failure: true.I tried to write rulesintuitiveas I except rules works, but I'm getting a different result.I first tried this:parallel: matrix: - PHP_VERSION: [ '7.4', '8.0', '8.1' ] rules: - if: '$CI_PIPELINE_SOURCE == "merge_request_event"' when: on_success - if: '$PHP_VERSION == "8.1"' allow_failure: trueIt result only to MR event for PHP 8.1 us added to Pipeline.My next iteration is still wrong:parallel: matrix: - PHP_VERSION: [ '7.4', '8.0', '8.1' ] rules: - if: '$CI_PIPELINE_SOURCE == "merge_request_event"' when: on_success - if: '$PHP_VERSION == "8.1"' when: on_success allow_failure: true - when: on_successThis looks better, but it runs job for every other event (not onlymerge_request_event).How I can to right combine rules to get result as I declared above? 🙏
How to combine GitLab CI job for rule:if with matrix and allow_failure?
Example dataset:

import numpy as np
from sklearn.preprocessing import FunctionTransformer
from sklearn.pipeline import Pipeline
import pandas as pd

X = pd.DataFrame({'Col1':[1,2],'Col2':[3,4],'Col3':[5,6]})

Your function:

def return_selected_dataset(dataset, columns):
    return dataset[columns]

Without the pipeline, it would be like:

FunctionTransformer(return_selected_dataset, kw_args={'columns':['Col1','Col2']}).transform(X)

Note that with a pipeline, you can only pass parameters to each of your fit steps; see the help page:

**fit_params : dict of string -> object
Parameters passed to the fit method of each step, where each parameter name is prefixed such that parameter p for step s has key s__p.

So I think what you can do is:

pipe = Pipeline([
    ('Return_Col', FunctionTransformer(return_selected_dataset, kw_args={'columns':['Col1','Col2']}))
])

pipe.fit_transform(X)

   Col1  Col2
0     1     3
1     2     4
I have been learning about sklearn preprocessing and pipelines and come across the concept of FunctionTransformer. I want to understand if one has to integrate it in a pipeline and pass arguments to a function which FunctionTransformer is referring to, how would that be done. Consider the example below, for simplicity, i have written a small function:def return_selected_dataset(dataset, columns): return dataset[columns] pipe = Pipeline([('Return_Col', FunctionTransformer(return_selected_dataset))]) pipe.fit_transform(dataset, columns = ['Col1', 'Col2'])I am getting the following error:ValueError: Pipeline.fit does not accept the columns parameter. You can pass parameters to specific steps of your pipeline using the stepname__parameter format, e.g. `Pipeline.fit(X, y, logisticregression__sample_weight=sample_weight)`.How can I pass the value ofcolumnsto the function? Also, can someone suggest any book or website where I can study the sklearn pipelines and preprocessing in detail and how to customize these processes?
Pass arguments to FunctionTransformer in Pipeline
Apparently, the solution is pretty simple; you just need to add a when: manual parameter to the job:

echo:
  stage: echo
  script:
    - echo 'this is a manual job'
  when: manual

Once that's done, the job can be triggered independently.
I would like to know how to manually trigger specific jobs in a project's CI pipeline. Since there is only one gitlab-ci.yml file, I can define many jobs to be executed one after the other sequentially. But what if I want to start a manual CI pipeline that only carries out one job? As I understand it, every time the pipeline will run, it will runalljobs, unless I use manyonlyand similar parameters. For instance, when I have this simple pipeline config:stages: - build build: stage: build script: - npm i - npm run build - echo "successful build"What do I do if I want to only run anechojob that runs a simpleecho "hello"script, but doesonlythat andonlywhen I manually run it? There are no 'triggers' for a job like that afaik. Is this even a possibility?Thanks for the clarification!
How can I create manually-run GitLab pipeline jobs?
This can be done using the rawBuild state:

import hudson.model.Result
currentBuild.rawBuild.@result = hudson.model.Result.SUCCESS

Found the answer in this question: How to manipulate the build result of a Jenkins pipeline job?
I'm using the env variable currentBuild.result to modify the overall job status of a Jenkins job. I can set it to a failure using currentBuild.result = 'FAILURE' and I can set it to Aborted using currentBuild.result = 'ABORTED', but I cannot clear these back to success using currentBuild.result = 'SUCCESS'.

This is driving me nuts. Any idea what I'm doing wrong here, and any pointers on how to set the overall job status to Success after it has been set to some other state?

Appreciate any pointers in advance!
Cannot Set the Job status back to Success in Jenkins Pipeline [duplicate]
I guess what you want to achieve is done like this:

List<Response<String>> responses = new ArrayList<>();
Pipeline p = jedis.pipelined();
for (int id : ids) {
    responses.add(p.get(String.valueOf(id)));
}
p.sync();

for (Response<String> response : responses) {
    String value = response.get();
}
I have a list of ids that I want to use to retrieve hashes from a Redis server using the java client jedis. As mentioned in the documentation, Jedis provides a way to use the pipeline by declaring Response objects and then sync the pipeline to get values:Pipeline p = jedis.pipelined(); p.set("fool", "bar"); p.zadd("foo", 1, "barowitch"); p.zadd("foo", 0, "barinsky"); p.zadd("foo", 0, "barikoviev"); Response<String> pipeString = p.get("fool"); Response<Set<String>> sose = p.zrange("foo", 0, -1); p.sync();However, my list has a variable length that keeps changing every few minutes. Thus, I am not able to predict the number of Response objects that I need to declare. Is there a way to get around that, something like:Pipeline p = jedis.pipelined(); Response<List<List<Map<String,String>>> records; for (int id: ids) records.add(p.hgetAll(id)) p.sync();
Getting values with jedis pipeline
The approach of creating a pipeline from a runspace, e.g.:

var pipeline = runspace.CreatePipeline();

is a 1.0 thing. That is, the original PowerShell hosting API required you to create the pipeline through the runspace that you created. My guess is that the team got feedback that the hosting API needed to be simplified, so they came up with the PowerShell class for the 2.0 release.

If you're interested in the nitty gritty details of what is different, go grab dotPeek, crack open System.Management.Automation.dll and peruse it. One difference is that PowerShell.Invoke() has to determine the type of runspace in use in order to determine which type of pipeline to create, e.g. LocalPipeline or RemotePipeline. When you use a Runspace directly, you actually create a derived class (LocalRunspace or RemoteRunspace) and each one of those will create the appropriate type of pipeline.
I am using powershell commands to execute scripts and cmdlets. So while executing cmdlets I used powershell.invoke and while executing a script I used pipeline.invoke method. I wanted to know if there is any difference between theSystem.Management.Automation.pipeline.invoke()method andSystem.Management.Automation.Runspaces.powershell.invoke()method.
What is the difference between pipeline.invoke and powershell.invoke?
Support for this feature has been added in Spark 2.3.0. See the release docs:

Multiple column support for several feature transformers:
[SPARK-13030]: OneHotEncoderEstimator (Scala/Java/Python)
[SPARK-22397]: QuantileDiscretizer (Scala/Java)
[SPARK-20542]: Bucketizer (Scala/Java/Python)

You can now use setInputCols and setOutputCols to specify multiple columns, although it seems not to be reflected in the official docs yet. Performance is greatly increased with this new patch compared to dealing with each column one job at a time.

Your example may be adapted as follows:

import org.apache.spark.ml.feature.QuantileDiscretizer
import org.apache.spark.sql.types.{StructType,StructField,DoubleType}
import org.apache.spark.ml.Pipeline
import org.apache.spark.rdd.RDD
import org.apache.spark.sql._
import scala.util.Random

val nRows = 10000
val nCols = 1000
val data = sc.parallelize(0 to nRows-1).map { _ => Row.fromSeq(Seq.fill(nCols)(Random.nextDouble)) }
val schema = StructType((0 to nCols-1).map { i => StructField("C" + i, DoubleType, true) } )
val df = spark.createDataFrame(data, schema)
df.cache()

//Get continuous feature names and discretize them
val continuous = df.dtypes.filter(_._2 == "DoubleType").map (_._1)

val discretizer = new QuantileDiscretizer()
  .setInputCols(continuous)
  .setOutputCols(continuous.map(c => s"${c}_disc"))
  .setNumBuckets(3)

val pipeline = new Pipeline().setStages(Array(discretizer))
val model = pipeline.fit(df)
model.transform(df)
All,I have a ml pipeline setup as belowimport org.apache.spark.ml.feature.QuantileDiscretizer import org.apache.spark.sql.types.{StructType,StructField,DoubleType} import org.apache.spark.ml.Pipeline import org.apache.spark.rdd.RDD import org.apache.spark.sql._ import scala.util.Random val nRows = 10000 val nCols = 1000 val data = sc.parallelize(0 to nRows-1).map { _ => Row.fromSeq(Seq.fill(nCols)(Random.nextDouble)) } val schema = StructType((0 to nCols-1).map { i => StructField("C" + i, DoubleType, true) } ) val df = spark.createDataFrame(data, schema) df.cache() //Get continuous feature name and discretize them val continuous = df.dtypes.filter(_._2 == "DoubleType").map (_._1) val discretizers = continuous.map(c => new QuantileDiscretizer().setInputCol(c).setOutputCol(s"${c}_disc").setNumBuckets(3).fit(df)) val pipeline = new Pipeline().setStages(discretizers) val model = pipeline.fit(df)When i run this, spark seems to setup each discretizer as a separate job. Is there a way to run all the discretizers as a single job with or without a pipeline? Thanks for the help, appreciate it.
How to use spark quantilediscretizer on multiple columns
You can try the Jenkins FLOW plugin...https://wiki.jenkins-ci.org/display/JENKINS/FLOW+Plugin
I've been using build pipeline plugin with Jenkins (v1.534) for a long time now and recently I've tried to create a pipeline with the same job (using different parameters) twice and it seems not possible. It looks like this:Job A (param env=dev) -> Job B -> Job A (param env=qa)Is this possible using build pipeline plugin (v1.4)?
Parametrized job using build pipeline plugin on Jenkins
What you are looking for is a dataflow framework. A pipeline is a specialized form of dataflow, where all components have 1 consumer and 1 producer.

Boost supports dataflow, but unfortunately I'm not familiar with Boost. Here's the link: http://dancinghacker.com/code/dataflow/dataflow/introduction/dataflow.html

Anyway, you should write your components as separate programs and use Unix pipes, especially if your data characteristic is (or can easily be transformed into) lines/text.

Also an option is to write your own dataflow thing. It's not too hard, especially when you have restrictions (I mean the pipe: 1-consumer/1-producer); you should not implement a full dataflow framework. Piping is just about binding some kind of functions together, passing one's result into the next one's argument. A dataflow framework is about a component interface/pattern and a binding technique. (It's fun, I've written one.)
I have been searching for a re-usable execution pipeline library in C++ (job scheduler library?). I could not find anything withinBoost. So I eventually found out two candidates:google-concurrency-librarylibpipelineAm I missing any other candidates ? Has anyone used them ? How good are they with regard to parallel io and multithreading ? Those libraries still seems to be missing dependencies handling. For instance it does not seems clear to me how one would write something like:$ cat /dev/urandom | tr P Q | head -3In this very simple case, pipeline is walked bottom up, and the firstcatstops executing whenheadprocess stops pulling.However I do not see how I can benefit from multi-threading and or parallel io in case such as:$ cat /raid1/file1 /raid2/file2 | tr P Q > /tmp/file3There is no way for me to say: executetron 7 threads when 8 processors available.
C++ library to build up execution pipeline
When you use a file variable in the variables section of a .gitlab-ci.yml file, the variable is expanded to contain the content instead of the file name. This is a bug in GitLab, and something that they might fix in an upcoming release. Here is the GitLab issue for it: https://gitlab.com/gitlab-org/gitlab/issues/29407

They have marked it with P2, which means that they will try to fix it within 60 days of tagging it. They seem to have missed the deadline for this month though.

In the meantime, you might have to just use the variable manually where needed. If you have a huge .gitlab-ci.yml file, you might be able to use yaml anchors or the extends keyword to reuse part of your script without having to depend on the variable expansion.
I am trying to set-up an pipline for building and testing our projects. A have set up to file Variables on group level to use inside the pipelineVariables for mvn settings and certificateThe Problem is, that the mvn_settings file is resolved as text and not as path. So my build fails.$ mvn $MAVEN_CLI_OPTS compile Unable to parse command line options: Unrecognized option: --><!--If I "echo"$mvn_settingsI get the path. Also when im hard coding the path the pipeline is working My pipline:variables: ... MAVEN_CLI_OPTS: "--batch-mode -s '$mvn_settings'" ... before_script: - keytool -importcert -file "$db_trust" -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit -trustcacerts -noprompt generate: stage: generate script: - mvn $MAVEN_CLI_OPTS compile artifacts: paths: - target/ expire_in: 3 days ...Is there any way to determine when the pipeline is using the content and when the path?
Gitlab Ci File Variable content instead of path
First of all, I believe you're mixing up two different things:

Estimators - which represent stages that can be fit-ted. The Estimator fit method takes a Dataset and returns a Transformer (model).

Transformers - which represent stages that can transform data.

When you fit a Pipeline, it fits all Estimators and returns a PipelineModel. The PipelineModel can transform data by sequentially calling transform on all Transformers in the model.

"how should I transfer the fitted values"

There is no single answer to this question. In general you have two options:

Pass parameters of the fitted model as the arguments of the Transformer.

Make parameters of the fitted model Params of the Transformer.

The first approach is typically used by the built-in Transformers, but the second one should work in some simple cases.

"how to handle persistence"

If a Transformer is defined only by its Params, you can extend DefaultParamsReadable.

If you use more complex arguments, you should extend MLWritable and implement an MLWriter that makes sense for your data. There are multiple examples in the Spark source which show how to implement data and metadata reading/writing.

If you're looking for an easy-to-comprehend example, take a look at the CountVectorizer(Model), where:

The Estimator and Transformer share common Params.

The model vocabulary is a constructor argument; model parameters are inherited from the parent.

Metadata (parameters) is written and read using DefaultParamsWriter / DefaultParamsReader.

A custom implementation handles data (vocabulary) writing and reading.
I want to develop a custom estimator for spark which handles persistence of the great pipeline API as well. But asHow to Roll a Custom Estimator in PySpark mllibput it there is not a lot of documentation out there (yet).I have some data cleansing code written in spark and would like to wrap it in a custom estimator. Some na-substitutions, column deletions, filtering and basic feature generation are included (e.g. birthdate to age).transformSchema will use the case class of the datasetScalaReflection.schemaFor[MyClass].dataType.asInstanceOf[StructType]fit will only fit e.g. mean age as na. substitutesWhat is still pretty unclear to me:transformin the custom pipeline model will be used to transform the "fitted" Estimator on new data. Is this correct? If yes how should I transfer the fitted values e.g. the mean age from above into the model?how to handle persistence? I found some genericloadImplmethod within private spark components but am unsure how to transfer my own parameters e.g. the mean age into theMLReader/MLWriterwhich are used for serialization.It would be great if you could help me with a custom estimator - especially with the persistence part.
Spark custom estimator including persistence
From section 4.1.2 in the canonical Rails Guides: "When files are precompiled, Sprockets also creates a gzipped (.gz) version of your assets."

To precompile your assets, use the bundled rake task:

# from command line
RAILS_ENV=production bundle exec rake assets:precompile

UPDATE: After some research into the subject, I've anecdotally found that, while Sprockets compresses JS and CSS assets, it does not compress images. Then I came across this gem: sprockets-image_compressor

I haven't implemented it myself, but it claims to provide lossless compression of image assets using pngcrush and jpegoptim. Interestingly, the docs state the following: "If the environment doesn't have pngcrush and/or jpegoptim installed, the gem will fall back on binaries packaged with the gem."

Again, I haven't used this myself, but if it does what it claims, it might be exactly what you're looking for.
How can I get the Rails asset pipeline to Gzip compress images? It compresses css and js files but not images.EDITRewritten question. Initially this was about subfolders but it seems Rails isn't compressing any images.
How to make the Rails asset pipeline Gzip images
The best solution to this problem is to create a node in Jenkins.

Step 1 - Go to the Manage Jenkins section and scroll down to the Manage Nodes section.
Step 2 - Click on New Node.
Step 3 - Give a name for the node, choose the Dumb slave option and click on Ok.
Step 4 - Enter the details of the node slave machine. In the example below, we are considering the slave machine to be a Windows machine, hence the option "Let Jenkins control this Windows slave as a Windows service" was chosen as the launch method. We also need to add the necessary details of the slave node, such as the node name and the login credentials for the node machine. Click the Save button. The label entered here, "New_Slave", is what can be used to configure jobs to use this slave machine.

Once the above steps are completed, the new node machine will initially be in an offline state, but will come online if all the settings in the previous screen were entered correctly. One can take the node slave machine offline at any time if required.

In my Jenkins pipeline:

node("build_slave"){
    sh 'docker exec -it $(docker ps | grep unique_text | cut -c1-10) bash deploy.sh'
}
I want to connect to a remote running Docker container directly with ssh. Normally I can$ ssh -i privateKey user@host $ docker ps #which will list all running containers $ docker exec -it ***** bash deploy.sh # ***** is container id and this line run a deployment scriptBut I need to run this script from a Jenkins pipeline where I have only one chance. After many trying, I come up with this$ ssh -tt -i ~/privateKey user@host docker exec -it $(docker ps | grep unique_text | cut -c1-10) /bin/bash deploy.shWhich have not help my plight because it returns"docker exec" requires at least 2 arguments.Which actually mean the command is truncated here$(docker ps | grep ...My Solutionsh 'ssh -tt -i $FILE -o StrictHostKeyChecking=no $USER@$HOST /bin/bash -c \'"docker exec -it $(docker ps | grep unique_text | cut -c1-10) bash start.sh"\''
How to connect directly to a remote docker container with ssh
As Charles Duffy said, to execute a binary, you'd have to tell your operating system (which seems to be a Unix variant) to load and execute something – and Unix systems only take files to execute them directly.

What you could do is have a process that prepares a memory region containing the ELF binary, then fork and jump into that region - but even that is questionable, considering that there's CPU support to suppress exactly that operation (R^X). Basically, what you need is a runtime linker, and shells do not (and also: should not) include something like that.

Let's drop the Bash requirement (which really just sounds like you're trying to find an obvious hole in an application that is older than I am grumpy):

Generally, requiring ELF (which is a file format) and avoiding files at the same time is a tad complicated. GCC generates machine code. If you just want to execute known machine code, put it into some buffer, build a function pointer to that buffer and call it. Simple as that. However, you'd obviously not have all the nice relocation and dynamic linking that the process of executing an ELF binary or loading a shared object (dlopen) would give you.

If you want that, I'd look in the direction of things like LLVM – I know, for a fact, that there are people building "I compile C++ at runtime and execute it" with LLVM as the executing instance and clang as the compiler. In the end, what your gcc | something really is, is just JIT – an old technology :)
For some obscure reason I have written a bash script which generates some source code, then compiles it, using... whatever ... | gcc -x c -o /dev/stdoutNow, I want to execute the result on the compilation. How can I make that happen? No use of files please.
How can I make bash execute an ELF binary from stdin?
There was a conflict in the jar files I used. I have kept both jedis 2.1.0 and jedis 2.0.0 by mistake in the build path.
Using Response Object in Jedis, throws ClassCastException . I am not able to get any value from Redis when I use pipeline. Please help. I am using Jedis 2.1.0public class JedisPipeline { public static void main(String args[]){ final JedisPool pool = new JedisPool(new JedisPoolConfig(), "127.0.0.1", 6379); Jedis jedis = pool.getResource(); Pipeline pipeline = jedis.pipelined(); pipeline.multi(); HashMap<String,String> map = new HashMap<String,String>(); map.put("50", "50"); pipeline.hmset("Id",map); Response <Long> incr = pipeline.hincrBy("Id", "100", 100); Response<Map<String,String>> map1 = pipeline.hgetAll("Id"); pipeline.exec(); List<Object> results = pipeline.syncAndReturnAll(); System.out.println(results); System.out.println( incr.get()); System.out.println( map1.get()); pool.returnResource(jedis); pool.destroy(); } } Exception in thread "main" java.lang.ClassCastException: [B cannot be cast to java.lang.Long at redis.clients.jedis.BuilderFactory$4.build(BuilderFactory.java:45) at redis.clients.jedis.BuilderFactory$4.build(BuilderFactory.java:48) at redis.clients.jedis.Response.get(Response.java:27) at redis.clients.jedis.Pipeline.syncAndReturnAll(Pipeline.java:42) at com.work.jedis.JedisPipeline.main(JedisPipeline.java:28)
Response Object in Jedis - throws ClassCastException
Looks like there is a problem sharing artifacts between pipelines as well as between projects. It is a known bug and has been reported here: https://gitlab.com/gitlab-org/gitlab/-/issues/228586

You can find a workaround there, but since it needs an access token added to the project, it is not the best solution.
I am working on a multi-pipeline project, and usingtriggerkeyword to trigger a downstream pipeline, but I'm not able to pass artifacts created in the upstream project. I am usingneedsto get the artifact like so:Downstream Pipeline block to get artifacts:needs: - project: workspace/build job: build ref: master artifacts: trueUpstream Pipeline block to trigger:build: stage: build artifacts: paths: - ./policies expire_in: 2h only: - master script: - echo 'Test' allow_failure: false triggerUpstream: stage: deploy only: - master trigger: project: workspace/deployBut I am getting the following error:This job depends on other jobs with expired/erased artifacts:I'm not sure what's wrong.
Gitlab ci issue with passing artifacts to Downstream pipeline with trigger and needs keywords
When using tf.data with a distribution strategy (which can be used with Keras and tf.Estimators), your input_fn should return a tf.data.Dataset:

def input_fn():
    # ... build the dataset ...
    dataset = dataset.repeat(num_epochs)
    dataset = dataset.batch(batch_size=batch_size)
    dataset = dataset.cache()
    dataset = dataset.prefetch(buffer_size=None)
    return dataset

...use input_fn...

See the documentation on distribution strategy.

dataset.make_one_shot_iterator() is useful outside of distribution strategies / higher-level libraries, for example if you are using lower-level libraries or debugging / testing a dataset. For example, you can iterate over all of a dataset's elements like so:

dataset = ...
iterator = dataset.make_one_shot_iterator()
get_next = iterator.get_next()
with tf.Session() as sess:
    while True:
        try:
            print(sess.run(get_next))
        except tf.errors.OutOfRangeError:
            break
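As a usage sketch (not part of the original answer), this is roughly how such an input_fn was wired into a TF 1.x-era Estimator together with a distribution strategy; model_fn and the step count are assumed placeholders:

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)

# model_fn is assumed to be defined elsewhere.
estimator = tf.estimator.Estimator(model_fn=model_fn, config=config)

# The estimator calls input_fn itself; input_fn returns the tf.data.Dataset,
# not iterator.get_next().
estimator.train(input_fn=input_fn, steps=1000)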
I am building a data pipeline using the Dataset API, but when I train on multiple GPUs and return dataset.make_one_shot_iterator().get_next() in my input function, I get

ValueError: dataset_fn() must return a tf.data.Dataset when using a tf.distribute.Strategy

I can follow the error message and return the dataset directly, but I do not understand the purpose of iterator().get_next() and how it works for training on a single GPU vs multiple GPUs.

    ...
    dataset = dataset.repeat(num_epochs)
    dataset = dataset.batch(batch_size=batch_size)
    dataset = dataset.cache()
    dataset = dataset.prefetch(buffer_size=None)
    return dataset.make_one_shot_iterator().get_next()
return _input_fn
Should I return dataset directly or should i use one_shot iterator instead?
Please check the Pipeline job. It has two types of definition: Pipeline script and Pipeline script from SCM. If the definition is Pipeline script from SCM, there is a repo URL and branches to build. And yes, there is no SCM configuration tab, but you can find the related settings under the Poll SCM option in the Build Triggers tab.
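To illustrate where the repository comes from in each case, here is a hedged Jenkinsfile sketch (the URL, branch and plugin steps are placeholders, not taken from the question's setup):

pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // With "Pipeline script from SCM", this reuses the SCM
                // configured in the job definition, so no URL appears here:
                checkout scm
                // With an inline "Pipeline script", the repository has to be
                // named explicitly, e.g.:
                // git url: 'ssh://gerrit.example.com:29418/my-project', branch: 'master'
            }
        }
    }
}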
I am configuring a legacy Jenkins which has the SCM Poll feature installed along with the Git and Gerrit plugins. For reasons I cannot comprehend, even though I have no branch or specific project configured anywhere for my git server nor in my job, I can clearly see that my job is handling the correct repository (the Git Polling Log works flawlessly) and other projects don't trigger it.

Is that configured in some very hidden place? Does it work by executing git on the workspace without knowing the context?

I cannot see any SCM configuration tab in my Jenkins, nor Fast Remote Polling or the other features I see mentioned on the Internet, but I am using pipeline scripts, so maybe that feature is disabled, or maybe my git/gerrit plugin is inferring the context somehow.

In the same section, "Build Triggers", for instance, I have the Gerrit Event option disabled, and if I select the checkbox it asks me for a repository URL and the usual details.
How does the Jenkins 'Poll SCM' feature know which repository and branch to poll?
There is no support for "nested pipelines" yet. It may be part of 4.1.0. For now you need to remove/add handlers on the fly. See [1] for an example.

[1] https://github.com/netty/netty/blob/master/example/src/main/java/io/netty/example/portunification/PortUnificationServerHandler.java
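The on-the-fly rewiring that the port-unification example demonstrates looks roughly like the sketch below. This is hand-written here, not taken from the Netty repository; P1Decoder/P1Encoder/P1Handler and their P2 counterparts are assumed names for your protocol-specific handlers.

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelPipeline;

public class DemuxHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ChannelPipeline p = ctx.pipeline();

        // Decide which protocol the already-decoded message belongs to.
        if (isProtocolOne(msg)) {
            p.addLast(new P1Decoder(), new P1Encoder(), new P1Handler());
        } else {
            p.addLast(new P2Decoder(), new P2Encoder(), new P2Handler());
        }

        // The demuxer has done its job; remove it so later messages flow
        // straight into the protocol-specific handlers.
        p.remove(this);

        // Pass the current message on to the newly added handlers.
        ctx.fireChannelRead(msg);
    }

    private boolean isProtocolOne(Object msg) {
        // Placeholder: inspect the decoded header and decide.
        return true;
    }
}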
I'm pretty new to Netty, but how would one implement a case in Netty 4.x when several protocols (e.g. P1 and P2) are encapsulated inside another protocol?

+-------------+
|   decoder   |
+-------------+
|   encoder   |
+-------------+
|    muxer    |
+-------------+
|   demuxer   |
+---+------+--+
    |      |
    |      +---------+
    |                |
    v                v
+-------------+  +-------------+
| P1 decoder  |  | P2 decoder  |
+-------------+  +-------------+
| P1 encoder  |  | P2 encoder  |
+-------------+  +-------------+
| P1 handler  |  | P2 handler  |
+-------------+  +-------------+

Is there a way to create nested pipelines, so that decoder <-> encoder <-> muxer <-> demuxer, being the main pipeline, would send the data along the P1 or P2 pipeline based on the decision of the demuxer? Or maybe there is a way to somehow create (for the sake of clarity) "subchannels" with their own pipelines?
Netty nested pipelines / multiplexing
SimpleImputer in make_pipeline:

preprocess_pipeline = make_pipeline(
    FeatureUnion(transformer_list=[
        ('Handle numeric columns', make_pipeline(
            ColumnSelector(columns=['Amount']),
            SimpleImputer(strategy='constant', fill_value=0),
            StandardScaler()
        )),
        ('Handle categorical data', make_pipeline(
            ColumnSelector(columns=['Type', 'Name', 'Changes']),
            SimpleImputer(strategy='constant', missing_values=' ', fill_value='missing_value'),
            OneHotEncoder(sparse=False)
        ))
    ])
)

SimpleImputer in Pipeline:

('features', FeatureUnion([
    ('Numeric Columns', Pipeline([
        ('Numeric Extractor', TypeSelector(np.number)),
        ('Impute Zero', SimpleImputer(strategy="constant", fill_value=0))
    ])),
    ('Categorical Columns', Pipeline([
        ('Category Extractor', TypeSelector("category")),
        ('Impute Missing', SimpleImputer(strategy="constant", fill_value='missing'))
    ]))
]))
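The snippets above rely on ColumnSelector/TypeSelector helpers that are not part of scikit-learn itself. A roughly equivalent sketch using only scikit-learn's own ColumnTransformer is shown below; the column names are borrowed from the question, the toy DataFrame is invented, and this should be read as an illustrative alternative rather than the answerer's code:

import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric_cols = ['Amount']
categorical_cols = ['Type', 'Name', 'Changes']

preprocess = ColumnTransformer(transformers=[
    # Impute missing numbers with 0, then standardize.
    ('num', Pipeline([
        ('impute', SimpleImputer(strategy='constant', fill_value=0)),
        ('scale', StandardScaler()),
    ]), numeric_cols),
    # Impute missing categories with a sentinel, then one-hot encode.
    ('cat', Pipeline([
        ('impute', SimpleImputer(strategy='constant', fill_value='missing_value')),
        ('onehot', OneHotEncoder(handle_unknown='ignore')),
    ]), categorical_cols),
])

# Example usage on a toy frame with missing values.
df = pd.DataFrame({
    'Amount': [10.0, np.nan, 3.5],
    'Type': ['a', np.nan, 'b'],
    'Name': ['x', 'y', np.nan],
    'Changes': ['up', 'down', np.nan],
})
X = preprocess.fit_transform(df)
print(X.shape)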
I have a pandas dataframe that has some NaN values in a particular column:

1291   NaN
1841   NaN
2049   NaN
Name: some column, dtype: float64

And I have made the following pipeline in order to deal with it:

from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline

scaler = StandardScaler(with_mean=True)
imputer = SimpleImputer(strategy='median')
logistic = LogisticRegression()

pipe = Pipeline([('imputer', imputer), ('scaler', scaler), ('logistic', logistic)])

Now when I pass this pipeline to a RandomizedSearchCV, I get the following error:

ValueError: Input contains NaN, infinity or a value too large for dtype('float64').

It's actually quite a bit longer than that -- I can post the entire error in an edit if necessary. Anyway, I am quite sure that this column is the only column that contains NaNs. Moreover, if I switch from SimpleImputer to the (now deprecated) Imputer in the pipeline, the pipeline works just fine in my RandomizedSearchCV. I checked the documentation, but it seems that SimpleImputer is supposed to behave in (nearly) the exact same way as Imputer. What is the difference in behavior? How do I use an imputer in my pipeline without using the deprecated Imputer?
Sklearn's SimpleImputer doesn't work in a pipeline?
I finally figured out one way, which is not very pretty: create a Vectors.dense for the features and then build the data frame out of that.

import org.apache.spark.mllib.regression.LabeledPoint

val myDataRDDLP = inputData.map { line =>
  val indexed = line.split('\t').zipWithIndex
  val myValues = indexed.filter(x => x._2 > 1770).map(x => x._1).map(_.toDouble)
  val mykey = indexed.filter(x => x._2 == 3).map(x => x._1.toDouble - 1).mkString.toDouble
  LabeledPoint(mykey, Vectors.dense(myValues))
}
val training = sqlContext.createDataFrame(myDataRDDLP).toDF("label", "features")
I was using a Spark ML pipeline to set up classification models on a really wide table. This means that I have to automatically generate all the code that deals with columns instead of literally typing each of them. I am pretty much a beginner on Scala and Spark. I was stuck at the VectorAssembler() part when I was trying to do something like the following:

val featureHeaders = featureHeader.collect.mkString(" ") // convert the header RDD into a string
val featureArray = featureHeaders.split(",").toArray
val quote = "\""
val featureSIArray = featureArray.map(x => s"$quote$x$quote")
// count the elements in headers
val featureHeader_cnt = featureHeaders.split(",").toList.length

// Fit on whole dataset to include all labels in index.
import org.apache.spark.ml.feature.StringIndexer
val labelIndexer = new StringIndexer().
  setInputCol("target").
  setOutputCol("indexedLabel")

val featureAssembler = new VectorAssembler().
  setInputCols(featureSIArray).
  setOutputCol("features")

val convpipeline = new Pipeline().
  setStages(Array(labelIndexer, featureAssembler))

val myFeatureTransfer = convpipeline.fit(df)

Apparently it didn't work. I am not sure what I should do to make the whole thing more automatic, or whether the ML pipeline simply does not accept that many columns at the moment (which I doubt).
Spark ML VectorAssembler() dealing with thousands of columns in dataframe
You might look at Zach Tellman's Lamina library. You can create pipelines of functions with error handlers, as well as other useful functionality.
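If you want to stay library-free, the requirements in the question can also be approximated with plain Clojure: represent the pipeline as a vector of step functions that either return an updated message or an error map, and short-circuit with reduced. A hedged sketch follows; the step names and the {:error ...} shape are my own assumptions, not Lamina's API.

(defn run-pipeline
  "Threads msg through steps; stops at the first step that returns {:error ...}."
  [steps msg]
  (reduce (fn [acc step]
            (let [result (step acc)]
              (if (:error result)
                (reduced result)
                result)))
          msg
          steps))

;; Example steps: a key-renaming transformation and a validation.
(defn rename-key [m]
  (-> m (assoc :customer-id (:cid m)) (dissoc :cid)))

(defn validate-amount [m]
  (if (number? (:amount m))
    m
    {:error "amount must be a number" :input m}))

;; Usage:
(run-pipeline [rename-key validate-amount] {:cid 1 :amount 10})
;; => {:customer-id 1, :amount 10}
(run-pipeline [rename-key validate-amount] {:cid 1 :amount "x"})
;; => {:error "amount must be a number", :input {:customer-id 1, :amount "x"}}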
I'm looking for a smart way to create composable validation and transformation pipelines in Clojure. The aim is to be able to do simple translation and validation of messages using composable steps.

Main requirements:

- Can be composed functionally, i.e. pipelines are pure functions
- Can be applied to regular Clojure data types (maps, vectors, lists, and nested combinations thereof)
- Can perform transformations, e.g. renaming a key in a map
- Can perform arbitrary validations (e.g. applying a Schema validation to part of a message)
- Can bail out gracefully when errors are detected, and return a meaningful error message (not just throwing an exception!)

I guess I could write all this, but I don't particularly feel like reinventing the wheel today :-)

Does anyone know of a tool that can do this, or have a good idea regarding how to construct one in a clever and general way?
Pipelines with error handling in Clojure
There are two immediate thoughts: (1) is Integrated Windows Authentication enabled on the server as a feature in the role, and (2) is the authentication configured in the right part of web.config? IIS7 stores some of its configuration in web.config, and moving from IIS6 to IIS7 often involves adding extra information.

See also: http://forums.iis.net/t/1153827.aspx
We have several web apps that use Windows Authentication and worked fine on IIS6. After deploying them to IIS7, Windows Authentication no longer works (we get 401.2 errors) UNLESS we set the web app to use the "Classic Pipeline".

I realize that Forms auth and Windows auth aren't simultaneously supported, as mentioned here and here - but that is not my issue - I don't have Forms Authentication enabled. I only have Windows Authentication enabled - but I am always getting the 401.2.

Has anyone run into this? Is there something else I need to do?

Thanks! -Mike
IIS 7 - Windows Authentication not working [closed]
In general this can be expected to break. The processes in a pipeline are all started up in parallel, so the > junk at the end of the line will usually truncate your input file before the process at the head of the pipeline has finished (or even started) reading from it.

Even if bash under Cygwin lets you get away with this, you shouldn't rely on it. The general solution is to redirect to a temporary file and then rename it when the pipeline is complete.
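A sketch of that temporary-file approach applied to the example from the question; the && ensures the original is only replaced if the pipeline succeeded, and the sponge variant is an optional extra that assumes the moreutils package is installed:

# Write to a temp file, then replace the original only on success.
sort -k1,1 junk | tr 'b' 'z' > junk.tmp && mv junk.tmp junk

# Alternative, if moreutils is available: sponge soaks up all of its input
# before opening the output file, so rewriting the same file is safe.
sort -k1,1 junk | tr 'b' 'z' | sponge junk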
In Cygwin, the following code works fine:

$ cat junk
bat
bat
bat
$ cat junk | sort -k1,1 | tr 'b' 'z' > junk
$ cat junk
zat
zat
zat

But in the Linux shell (GNU/Linux), it seems that overwriting doesn't work:

[41] othershell: cat junk
cat
cat
cat
[42] othershell: cat junk | sort -k1,1 | tr 'c' 'z'
zat
zat
zat
[43] othershell: cat junk | sort -k1,1 | tr 'c' 'z' > junk
[44] othershell: cat junk

Both environments run bash. I am asking this because sometimes, after I do text manipulation, this caveat forces me to make a tmp file. But I know that in Perl you can give the "i" flag to overwrite the original file after some operations/manipulations. I just want to ask if there is any foolproof method in a Unix pipeline to overwrite the file that I am not aware of.
Why piping to the same file doesn't work on some platforms?
We've been using TFS for the last 18 months, and like many products the first version left a bit to be desired (one of the favourites of TFS 2005 was not to get latest when it said it had, resulting in many build breaks).

However, now that we're on TFS 2008 SP1 it works exceptionally well. The source control system is fast and intuitive, and integrates seamlessly with Visual Studio. For things like renaming, moving, branching and merging it easily surpasses other tools such as Subversion in terms of how well it tracks things and its ability to merge branches.

In spite of what anybody says, there simply is no comparison between TFS source control and VSS. And you don't have to worry about your repository getting corrupted either!

The only problem that still seems apparent is that every couple of weeks TFS slows down and getting latest takes ages, requiring a restart of the SQL Server to fix. I don't know why this is.
My workplace is planning on moving to Team Foundation Server and it's not a moment too soon - anything to get away from the cancer that is Visual SourceSafe.However, I must ask - is the source control in TFS significantly different (and better) than VSS or is it just a "beefed up" version of the same thing?I ask this now since this is probably my last window to suggest something like Subversion.
Is TFS's source control just a beefed up VSS or is it significantly different?
You're passing the name as input, but mkdir expects an argument. Try:

echo NAME | xargs mkdir

xargs here provides exactly the missing link: it takes the input stream and passes it to the program (mkdir, in this case) as arguments. Note that this parses whitespace-separated elements as different args, so use it with care. For more information just look at man xargs.
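If the directory name can contain spaces or other special characters, a more defensive variant (my own addition, assuming GNU xargs with -0 support) is to pass the name NUL-delimited, or to skip the pipe entirely:

# NUL-delimit the name so embedded whitespace survives intact.
printf '%s\0' "some dir name" | xargs -0 mkdir -p

# Or, when the name is already in a variable, just quote it:
name="some dir name"
mkdir -p -- "$name"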
I know that the echo command prints all its arguments and does not read from stdin. But when I try to run

echo NAME | mkdir

it tells me: mkdir: missing operand. I tried reading man mkdir, but it does not tell me where mkdir reads from.
Trying to pass the output of echo into mkdir command
This has extra spaces, which are not valid:

terraform plan -out = plan.tfplan

It should be like the following:

terraform plan -out=plan.tfplan
I am new to Terraform. I am trying to create a simple storage account through an Azure pipeline, however when I run my pipeline I get the error "Too many command line arguments". I am stuck and I do not know what I am doing wrong. Can someone please help.

This is my plan script in the pipeline:

- script: terraform plan -out = plan.tfplan
  displayName: Terraform plan
  workingDirectory: $(System.DefaultWorkingDirectory)/terraform
  env:
    ARM_CLIENT_ID: $(application_id)
    ARM_CLIENT_SECRET: $(client_secret)
    ARM_TENANT_ID: $(tenant_id)
    ARM_SUBSCRIPTION_ID: $(subscription_id)
    TF_VAR_client_id: $(application_id)
    TF_VAR_tenant_id: $(tenant_id)
    TF_VAR_subscription_id: $(subscription_id)
    TF_VAR_client_secret: $(client_secret)

The error that I am getting:

Starting: Terraform plan
Generating script.
Script contents: terraform plan -out = plan.tfplan
========================== Starting Command Output ===========================
/usr/bin/bash --noprofile --norc /home/vsts/work/_temp/3d07140f-ec17-4bfc-9384-a1170fae1248.sh
╷
│ Error: Too many command line arguments
│
│ To specify a working directory for the plan, use the global -chdir flag.
╵
For more help on using this command, run:
  terraform plan -help
##[error]Bash exited with code '1'.
Finishing: Terraform plan
Too many command line arguments Terraform plan
You first need to install the .NET 6 SDK on the agent; add this before the DotNetCoreCLI task:

- task: UseDotNet@2
  displayName: 'Install .NET Core sdk 6.x'
  inputs:
    version: 6.x
I have updated my website to .NET 6. It works locally. However, my YAML pipeline in Azure DevOps is no longer running. There is an error like this in the publishing step for all .csproj files in the solution. I don't know how to configure it to use .NET 6.

C:\Program Files\dotnet\sdk\5.0.403\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.TargetFrameworkInference.targets(141,5): error NETSDK1045: The current .NET SDK does not support targeting .NET 6.0. Either target .NET 5.0 or lower, or use a version of the .NET SDK that supports .NET 6.0. [D:\a\1\s\04_Contracts\Contracts\Contracts.csproj]
##[error]Error: The process 'C:\Program Files\dotnet\dotnet.exe' failed with exit code 1

Here is the pipeline step for publishing:

- task: DotNetCoreCLI@2
  displayName: 'dotnet publish'
  inputs:
    command: publish
    publishWebProjects: false
    projects: '**/*.csproj'
    arguments: '-r linux-x64 --output $(build.artifactstagingdirectory)'
Pipeline does not work after updating to .NET 6
Only the features get scaled; y is passed through untouched. The confusion is understandable when looking at the documentation. To convince yourself, run the pipeline with just the scaler and look at the output.
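A quick sketch of that check, using my own toy data (not from the question); the transformation is applied to X alone, so y comes back unchanged:

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([10.0, 20.0, 30.0, 40.0])

scale_only = Pipeline([('scaler', StandardScaler())])

# y is accepted by fit/fit_transform only for API compatibility;
# the scaler never touches it.
Xt = scale_only.fit_transform(X, y)
print(Xt.ravel())   # standardized features, mean 0 / unit variance
print(y)            # unchanged: [10. 20. 30. 40.]

If you actually want the target scaled as well, scikit-learn provides sklearn.compose.TransformedTargetRegressor for that purpose.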
Given that my pipeline is

pipe = Pipeline([('scaler', StandardScaler()), ('regressor', LinearRegression())])

and I then call pipe.fit(X_train, y_train), does the pipeline apply the scaler to both features and target, or only to the features?

If not, what purpose does the y argument serve in the fit_transform method of the StandardScaler? The documentation is really confusing. For fit it says that y is ignored. For fit_transform it says "Fits transformer to X and y". For transform it says y is deprecated.

I tried going through the source code on GitHub, but quickly got lost jumping through chains of functions.
Does scikit learn Pipeline apply StandardScaler to y?
Try this:

Get-VM xxxxx | Get-HardDisk | Select Parent, Name, Filename, DiskType, Persistence | Out-File -FilePath C:\Filepath
I'm new to scripting and I am trying to write the information returned about a VM to a text file. My script looks like this:

Connect-VIServer -Server 192.168.255.255 -Protocol https -User xxxx -Password XXXXXX
Get-VM -Name xxxxxx
Get-VM xxxxx | Get-HardDisk | Select Parent, Name, Filename, DiskType, Persistence | FT -AutoSize
Out-File -FilePath C:Filepath

I am able to connect to the VM, retrieve the HDD info and see it in the console. The file is created where I want it and is correctly named, but no data is ever put into the file. I have tried Tee-Object with the same results. I've also tried the -Append switch. I did see a post about the data being returned as an array and PowerShell not being able to move the data from an array to a string. Do I need to create a variable to hold the returned data and write to the file from there?

Thanks
Writing console output to a file - file is unexpectedly empty
The score method is always accuracy for classification and the r2 score for regression. There is no parameter to change that. It comes from ClassifierMixin and RegressorMixin.

Instead, when we need other scoring options, we have to import them from sklearn.metrics, like the following (the function is called balanced_accuracy_score in current scikit-learn):

from sklearn.metrics import balanced_accuracy_score

y_pred = pipeline.predict(self.X[test])
balanced_accuracy_score(self.y_test, y_pred)
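If you prefer to keep the estimator-style call shape of pipeline.score(X_test, y_test) rather than computing the metric by hand, scikit-learn's make_scorer wraps a metric into that signature. A small sketch, assuming the pipeline and the train/test split from the question:

from sklearn.metrics import balanced_accuracy_score, make_scorer

balanced_scorer = make_scorer(balanced_accuracy_score)

# Same call shape as pipeline.score(X_test, y_test),
# but with the metric of your choice.
score = balanced_scorer(pipeline, X_test, y_test)
print(score)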
I can do this:

model = linear_model.LogisticRegression(solver='lbfgs', max_iter=10000)
kfold = model_selection.KFold(n_splits=number_splits, shuffle=True, random_state=random_state)
scalar = StandardScaler()
pipeline = Pipeline([('transformer', scalar), ('estimator', model)])
results = model_selection.cross_validate(pipeline, X, y, cv=kfold, scoring=score_list, return_train_score=True)

where score_list can be something like ['accuracy', 'balanced_accuracy', 'precision', 'recall', 'f1'].

I can also do this:

kfold = model_selection.KFold(n_splits=number_splits, shuffle=True, random_state=random_state)
scalar = StandardScaler()
pipeline = Pipeline([('transformer', scalar), ('estimator', model)])
for i, (train, test) in enumerate(kfold.split(X, y)):
    pipeline.fit(self.X[train], self.y[train])
    pipeline.score(self.X[test], self.y[test])

However, I am not able to change the score type for the pipeline in the last line. How can I do that?
Sklearn: Is there a way to define a specific score type to pipeline?
The solution here is to use a bash script under the script task. For example, the workaround looks something like this:

- script: |
    cd server && npm run install
    npm run install mocha-junit-reporter
    npm run unit_tests
  displayName: 'npm install and build'
I have a monorepo with a folder structure like this:

root
  -> packageA
  -> packageB
  -> packageC

How can I alter the azure-pipelines.yml to build packageA?

I have tried altering the azure-pipelines.yml by specifying the path to packageA. However, I am a newbie to CI/CD so I am not sure how to solve my problem. Currently I have this as my azure-pipelines.yml file:

# Node.js
# Build a general Node.js project with npm.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/javascript

trigger:
  branches:
    include:
      - master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: NodeTool@0
  inputs:
    versionSpec: '10.x'
  displayName: 'Install Node.js'

- script: |
    npm install
    npm run unit_tests
  displayName: 'npm install and build'

The .yml file is in the root folder of the monorepo. The pipeline build will fail because it cannot find the package.json to run the npm commands in packageA.
How to specify path to build a package for azure-pipelines.yml in a Mono-Repo?