Response
Instruction
Prompt
If I understand you correctly, you can manually execute the task using Rake::Task:

task "foo" do
  puts "Doing something in foo"
end

task "bar" => "foo" do
  puts "Doing something in bar"
  Rake::Task["foo"].execute
end

When you run rake bar, you'll see:

Doing something in foo
Doing something in bar
Doing something in foo

If you use Rake::Task["foo"].execute, the task is executed without checking any prerequisites. Let me know if this doesn't help you.
is there any way to force the execution of task in Rake, even if the prerequisites are already met?I am looking for the equivalent of the --always-make option for GNU/make (http://www.gnu.org/software/make/manual/make.html#Options-Summary)Example Rakefile:file "myfile.txt" do system "touch myfile.txt" puts "myfile.txt created" endHow would the --always-make option work:# executing the rule for the first time creates a file: $: rake myfile.txt myfile.txt created # executing the rule a second time returns no output # because myfile.txt already exists and is up to date $: rake myfile.txt # if the --always-make option is on, # the file is remade even if the prerequisites are met $: rake myfile.txt --always-make myfile.txt createdI am running Rake version 0.9.2.2, but I can't find any option in the --help and man pages.
How to force the execution of a task in Rake, even if prereqs are met?
There's no official way at the moment. You could probably prepend a task to the MapReduce pipeline to compute and cache the list (in the datastore or blobstore, whichever is most appropriate, plus a copy in memcache). Then have your mapper and/or reducer function do a lazy initialization of a global variable that holds the list, checking first in memcache, and falling back on datastore/blobstore as necessary (and re-caching the list). As new instances are spun up to handle tasks, they'll initialize themselves.Assuming the list is fixed at the time the MapReduce starts, competing reads from different instances won't be an issue.
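As a rough sketch of the lazy-initialization idea in Python (the cache key and the load_list_from_datastore() fallback helper are hypothetical, not part of the original answer):

from google.appengine.api import memcache

_SHARED_LIST = None   # per-instance cache, filled on first use

def get_shared_list():
    global _SHARED_LIST
    if _SHARED_LIST is None:
        cached = memcache.get('mapreduce_shared_list')
        if cached is None:
            cached = load_list_from_datastore()            # hypothetical datastore/blobstore fallback
            memcache.set('mapreduce_shared_list', cached)  # re-cache for other instances
        _SHARED_LIST = cached
    return _SHARED_LIST

def my_mapper(entity):
    shared_list = get_shared_list()   # cheap after the first call on this instance
    # ... emit values using shared_list ...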
I've begun creating a MapReduce job with the new Google App Engine Pipeline API, and I've run into a situation where I'd like every worker to have a copy of the same list during runtime.One option would be to use memcache, but I'm worried that the size of this list might eventually be greater than what I can set with memcache. I think my other option would be to initialize every worker with this list context at runtime, but I can't find any way to do this in the docs and looking at the source code hasn't offered any obvious answers.Is there a way to add extra parameters into a map reduce function or otherwise inject state into a MapReduce worker context?
Can I keep state across GAE Pipeline API workers?
Is it possible to force/select the output type of decodebin2? No, you cannot force or select the type of its source pad. decodebin2 selects an appropriate demuxer element, that demuxer parses the media file and, depending on the codec of the elementary stream inside it, creates the caps/type of the output (source) pad so that a matching decoder can be linked to it and the pipeline works. All of this happens in GStreamer plugin code, so if you want that kind of control you would need to write a plugin yourself.
I'm trying to write a program in C which replicates the pipeline:gst-launch -v filesrc location="bbb.mp4" ! decodebin2 ! ffmpegcolorspace ! autovideosinkDecodeBin2 has a dynamic pad and I've attached a callback to handle its creation. I am unable to link it to ffmpegcolorspace however because the pad capability is always video/quicktime. I would like it to be video/x-raw-yuv or something else which is compatible with ffmpegcolorspace.Is this possible to force/select the output type of decodebin2?Thanks.EDIT:Please do not recommend playbin. I'm trying to learn how how to make pipelines.
GStreamer force decodebin2 output type
Doesn't plink allow you to specify the user and host together in one argument? That is:

plink -ssh user@host

If so, your ssh function could be whittled down to:

function ssh {
  param($usernameAndServer)
  C:\plink.exe -ssh $usernameAndServer
}
I'm trying to make a simple PowerShell function to have a Linux-style ssh command. Such as:ssh username@urlI'm using plink to do this, and this is the function I have written:function ssh { param($usernameAndServer) $myArray = $usernameAndServer.Split("@") $myArray[0] | C:\plink.exe -ssh $myArray[1] }If entered correctly by the user, $myArray[0] is the username and $myArray[1] is the URL. Thus, it connects to the URL and when you're prompted for a username, the username is streamed in using the pipeline. Everything works perfectly, except the pipeline keeps feeding the username ($myArray[0]) and it is entered as the password over and over. Example:PS C:\Users\Mike> ssh xxxxx@yyyyy login as: xxxxx@yyyyy's password: Access denied xxxxx@yyyyy's password: Access denied xxxxx@yyyyy's password: Access denied xxxxx@yyyyy's password: Access denied xxxxx@yyyy's password: Access denied xxxxx@yyyyy's password: FATAL ERROR: Server sent disconnect message type 2 (protocol error): "Too many authentication failures for xxxxx"Where the username has been substituted with xxxxx and the URL has been substituted with yyyyy.Basically, I need to find out how to stop the script from piping in the username ($myArray[0]) after it has been entered once.Any ideas? I've looked all over the internet for a solution and haven't found anything.
Pipelining String in Powershell
Most UNIX utilities pay attention to SIGPIPE, which, when received, indicates that a piped stream to which the process is writing has nothing listening on the other end, so there's no need to keep working. Searching for something like "java posix signals" yields some libraries that might allow you to add signal handlers to your Java code.
when I'm running the following pipeline:cat my_large_file.txt | head | wcthe process stops almost immediately. OK.but when I run my java programjava MyProgramReadALargeFile my_large_file.txt | head | wcthe output from 'wc' is printed tostdoutbut the java program isstill running. How can it detects that the pipeline was closed ?Thanks,
Stopping a program if the next process in the pipeline was stopped
A very simple solution, that is also close to what actually happens in ASP.NET:

class EventChain
{
    public event EventHandler Phase1Completed;
    public event EventHandler Phase2Completed;
    public event EventHandler Phase3Completed;

    protected void OnPhase1Complete()
    {
        if (Phase1Completed != null)
        {
            Phase1Completed(this, EventArgs.Empty);
        }
    }

    protected void OnPhase2Complete()
    {
        if (Phase2Completed != null)
        {
            Phase2Completed(this, EventArgs.Empty);
        }
    }

    public void Process()
    {
        // Do Phase 1 ...
        OnPhase1Complete();
        // Do Phase 2 ...
        OnPhase2Complete();
    }
}
In ASP.NET Web Apps , events are fired in particluar order :for simplicityLoad => validation =>postback =>renderingSuppose I want to develop such pipeline -styled eventExample :Event 1 [ "Audiance are gathering" ,Guys{ Event 2 and Event 3 Please wait until i signal }]after Event 1 finished it taskEvent 2 [ { Event 2, Event 3 "Audiance gathered! My task is over } ]Event 2 is taking over the control to perform its taskEvent 2 [ " Audiance are Logging in " Event 3 please wait until i signal ]after Event 2 finished it task.....Event 3 [ "Presentation By Jon skeet is Over :) "]With very basic example can anybody explain ,how can i design this ?
C# -Pipeline Style event model
The best approach would be to use a post sync hook. Why is it necessary to run the tests in a GitHub Actions workflow? Since the application is already deployed to the cluster, wouldn't it make more sense to run the tests directly on the cluster, instead of going through the trouble of communicating with GitHub?
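For illustration, a post sync hook is just a manifest annotated so that Argo CD runs it after a successful sync; the Job name, image, and command below are placeholders (a sketch, not part of the original answer):

apiVersion: batch/v1
kind: Job
metadata:
  name: post-deploy-tests              # placeholder name
  annotations:
    argocd.argoproj.io/hook: PostSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: tests
          image: registry.example.com/e2e-tests:latest   # placeholder test image
          command: ["./run-tests.sh"]                     # placeholder test entrypoint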
I'm working in a company that uses Github Actions and Argocd.(using argocd helm chart). Needless to say that the Github repo is private and argocd is in an internal network that used by the company only.The flow of what we want to do is that when we deploy the app and the deployment succeeded - Trigger another workflow that will run tests on the deployed environment. Basically, the deployment will be the trigger for another workflow.I have been trying to configure webhook from argocd to github but with no success. What is the best approach to this situation, will be happy to provide more context if needed.Edit: The test workflow i'm trying to use workflow_dispatch.name: workflow_02 on: push: branches: [ argo-github-trigger ] workflow_dispatch: jobs: log-the-inputs: runs-on: ubuntu-latest steps: - run: | echo "Worked"I'm expecting to see a "Run workflow" button on github but it doesn't appear. On another repo, that I have Admin priviliges and can work on the main branch, I tried the same workflow and it worked.
Github Action - How can I trigger a workflow when argocd deployment is finished?
Background: Following best practices, in general, of using feature names (e.g., column names of a dataframe pandas), these should be without spaces between them.Base caseTobypass your problem, you can use a string as a parameter where each element is a single feature.features = "feature_0 feature_1 feature_2"and then, use it normally withParameterString.If it cannot be that way, I recommend inserting a specific separation pattern between names instead of space and splitting the whole string into features list later.At this point, in the training script you pass the parameter to the ArgumentParser which you can configure to have the space-separated word string reprocessed into a list of individual words.import argparse if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument( "--features", nargs="*", type=str, default=[] ) args, _ = parser.parse_known_args()Extra caseShould the string mistakenly be interpreted as a list directly when passing the argument to a pipeline component (e.g., to a preprocessor), the latter can be reworked with an input reinterpretation function.import itertools def decode_list_of_strings_input(str_input: str) -> []: str_input = [s.split() for s in str_input] return list(itertools.chain.from_iterable(str_input))Here is an example of the use of this code:features = ['a b c'] features = decode_list_of_strings_input(features) print(features) >>> ['a', 'b', 'c']
The Sagemaker Pipeline only has Parameter classes for single values (a string, a float, etc), but how can I deal with a parameter that is best represented by a list (e.g. the list of features to select for training from a file with many features)?
Sagemaker Pipelines | Pass list of strings as parameter
I had the same issue and got a workaround by using the ENV object from theNX configuration:This way, I added the tags in theproject.jsonconfiguration file, in my case for running the smoke test and a regression test based on the tags filtering:"smoke": { "executor": "@nrwl/cypress:cypress", "options": { "cypressConfig": "apps/explore-e2e/cypress.json", "baseUrl": "<BASE_URL>", "env": { "TAGS": "@smoke" } }, "configurations": { "staging": { "baseUrl": "<STG_URL>" }, "production": { "baseUrl": "<PROD_URL>" } } }, "regression": { "executor": "@nrwl/cypress:cypress", "options": { "cypressConfig": "apps/explore-e2e/cypress.json", "baseUrl": "<BASE_URL>", "env": { "TAGS": "@regression" } }, "configurations": { "staging": { "baseUrl": "<STG_URL>" }, "production": { "baseUrl": "<PROD_URL>" } } }With this, you can now start tagging your scenarios and running it with:nx e2e myProject-e2e:smoke --TAGS=@smoke(In my case I'm using:yarn nx runinstead)
I am working in a nrwl nx workspace,, I have a cypress BDD cucumber project set up in it. I need to run cypress tests based on tags using nrwl.Normally i would use cypress-tags command to do the same: eg:"cypress run --env TAGS='@smoke' --browser chrome"I applied the same logic to an nx command. eg:nx e2e myProject-e2e --tags=@regBut the nx project is identifying all test cases in cypress, it does not take into consideration the test cases tagged with tag "@reg"Can someone guide me if there is a provision in nrwl to run cypress tests based on tags
How to run cypress tests with tags in nrwl nx workspace
Since you want (potentially) to use a different subset of features for each output, you should just put the SelectKBest in a pipeline with the LogisticRegression inside the MultiOutputClassifier.

clf = Pipeline([
    ("feature_selection", SelectKBest(score_func=f_regression, k=9)),
    ("logistic regression", LogisticRegression(penalty="l2", C=2)),
])
estimator = MultiOutputClassifier(clf)

pipeline = Pipeline([
    ("first transformer", ct),
    ("second transformer", OHE),
    ('standard_scaler', MinMaxScaler()),
    ("select_and_model", estimator),
])
I have a nice pipeline that does the following:pipeline = Pipeline([ ("first transformer", ct), ("second transformer", OHE), ('standard_scaler', MinMaxScaler()), ("logistic regression", estimator) ])The estimator part is this:estimator = MultiOutputClassifier( estimator = LogisticRegression(penalty="l2", C=2) )Label DataFrame is of shape (1000, 2) and all works nicely so far.To tweak the model I now try to add SelectKBest to limit the features used for calculations. Unfortunately adding this code to the pipeline:('feature_selection', SelectKBest(score_func=f_regression, k=9))returns this error:ValueError: y should be a 1d array, got an array of shape (20030, 2) instead.I understand where it comes from and using only one label (1000, 1) solves the issue but that means I would need to create two separate pipelines for each label.Is there any way of including feature selection in this pipeline without resorting to that?
Using different features for the same estimator in the pipeline
It seems like the last step of the pipeline is not cached. Here is a slightly modified version of your script.from sklearn.base import BaseEstimator, TransformerMixin from sklearn.pipeline import Pipeline import time class Test(BaseEstimator, TransformerMixin): def __init__(self, col): self.col = col def fit(self, X, y=None): print(self.col) return self def transform(self, X, y=None): for t in range(5): # just to slow it down / check caching. print(".") time.sleep(1) #print(self.col) return X pipline = Pipeline( [ ("test", Test(col="this_column")), ("test2", Test(col="that_column")) ], memory="tmp/cache", ) pipline.fit(None) pipline.fit(None) pipline.fit(None) #this_column #. #. #. #. #. #that_column #that_column #that_column
I've seen the following:Using scikit Pipeline for testing models but preprocessing data only once, but this isn't working. I'm usingscikit-learn 1.0.2.Example:from sklearn.base import BaseEstimator, TransformerMixin from sklearn.pipeline import Pipeline from tempfile import mkdtemp from joblib import Memory import time from shutil import rmtree class Test(BaseEstimator, TransformerMixin): def __init__(self, col): self.col = col def fit(self, X, y=None): return self def transform(self, X, y=None): for t in range(5): # just to slow it down / check caching. print(".") time.sleep(1) print(self.col) cachedir = mkdtemp() memory = Memory(location=cachedir, verbose=10) pipline = Pipeline( [ ("test", Test(col="this_column")), ], memory=memory, ) pipline.fit_transform(None)Which will display:. . . . . this_columnWhen calling it a second time I'mexpectingit to be cached, and therefore not have to display the five.\n.\n.\n.\n.output prior tothis_column.This isn't happening though, it gives me the output from the for loop withtime.sleep.Why is this happening?
How to implement caching with sklearn pipeline
We solved it! The problem was in the default GitLab runner, which is applied to all GitLab projects. So we had two runners: the default one and the MacBook's runner. Sometimes GitLab ran our build on the non-configured default runner and it failed. We removed the default runner from our GitLab project and everything is working as expected!
We have a macbook with Runner for gitlab CI on it. Sometimes, pipeline fails with error "flutter: command not found". Sometimes it works correctly and all unit and integration tests passes.What can be the reason of such behaviour?gitlab-ci.yml file is:before_script: - flutter channel stable - flutter upgrade - flutter pub get stages: - test_unit - test_integration test_unit: stage: test_unit script: - flutter test - cd android - cp ~/builds/QKu8Lg6_/0/mobile/local.properties ~/builds/QKu8Lg6_/0/mobile/app/android - ./gradlew app:connectedAndroidTest only: - merge_requests except: - schedules retry: 2 test_integration: stage: test_integration script: - flutter drive --target=test_driver/app/app.dart - flutter drive --target=test_driver/app/app.dart -d iPhone Xʀ - flutter drive --target=test_driver/skill/time/time.dart - flutter drive --target=test_driver/skill/time/time.dart -d iPhone Xʀ only: - schedules retry: 2
Sometimes get error "flutter: command not found" during pipeline run in gitlab
in Send-MailMessage, -To should not accept pipeline objects

In principle it does, namely if the pipeline objects have a .To property (which is not the case for you). However, with your current approach you don't need pipeline input at all, given that you're supplying all input as arguments. Additionally, your pipeline input is incorrect, because $_.UID sends $null through the pipeline, given that $_ - a group-info object output by Group-Object - doesn't have a .UID property.

Using delay-bind script blocks ({ ... }), you can simplify your command as follows, obviating the need for a ForEach-Object call:

Import-Csv C:\path\info.csv | Group-Object UID |
  Send-MailMessage -From "<[email protected]>" -To { "<$($_.Name)@Email.com>" } `
    -Attachments { "C:\path\$($_.Name).csv" } `
    -Subject "Testing" -Body "Please Ignore This" -Priority High `
    -SmtpServer smtp.server.com

In short, the script blocks passed to -To and -Attachments are evaluated for each input object, and their output determines the parameter value in each iteration. In the script block, $_ represents the pipeline object at hand, as usual. Note that such delay-bind script blocks can only be used with parameters that are designed to accept pipeline input (irrespective of whether by value (whole object) or by a specific property's value).
First ever Powershell script so any advice or recommendations are appreciated. I'm parsing a .csv into smaller .csv's to send out information about servers to recipients and i'm running into a problem in my foreach. How do I get this to work?One interesting thing is that in Send-MailMessage, -to should not accept pipeline objects, It still throws an error, but it still sends the emails. However the attachment will never send.#had to set this as a variable because @ was throwing splatting errors $Mail = "@Email.com" #Import csv and split information, exports UID.csv Import-csv C:\path\info.csv | Group-Object UID | ForEach-Object { $_.Group | Export-csv "C:\path\$($_.Name).csv" -NoTypeInformation } #Import file again to get unique list of UID and send mail with #respective UID.csv Import-csv C:\path\info.csv | Group-Object UID | ForEach-Object { $_.UID | Send-MailMessage -From "<[email protected]>" -To "<$($_.Name)$Mail>" ` -Attachments "C:\path\$($_.Name).csv" ` -Subject "Testing" -Body "Please Ignore This" -Priority High ` -SmtpServer smtp.server.com }
Using pipeline object to populate mail -to and -attachment
You are using the wrong geom: geom_bar needs a count statistic. Use geom_col instead:

your summarised data %>% ggplot() + geom_col(...)
Can someone tell me how I integrate the following dplyr code with a ggplot bar chart. I have them both working indipendently but whenever I try to put it together something goes wrong.Here is the dplyr code which produces a summary table (inserted below script):pop.2.df %>% gather(key = "trial", value = 'capture','sco.1','sco.2', 'sco.3') %>% group_by(trial, capture) %>% summarise(n = n()) A tibble: 6 x 3 Groups: trial [?] trial capture n <chr> <chr> <int> 1 sco.1 n 28 2 sco.1 y 94 3 sco.2 n 38 4 sco.2 y 84 5 sco.3 n 45 6 sco.3 y 77Here is the ggplot code which produces the barchart, however I can only get it to work if I first make an object from the script above then plot. I would like to have it all working within one piece of code using pipeline operator:ggplot(data = pop.2.df.v2) + geom_bar(mapping = aes(x = trial, fill = capture), position = "dodge")
dplyr/ggplot using pipeline
You can use the jsonPathDefinition, similar to this: "column_full": "$. " Refer to this link on how to use the JSON format with ADF: https://learn.microsoft.com/en-us/azure/data-factory/supported-file-formats-and-compression-codecs#json-format
I am using Azure Data Factory V1. We want to copy the json data stored as documents from Azure cosmos to a azure sql table, using a copy activity.I figured out copying the data by specifying the columns in sql table to match the property names from json. However our goal is to copy the entire json data as a single field. We are doing this for the purpose of being agnostic to the schema within the json data.I have tried specifying a single nvarchar(max) column to store the json data, and the query on the copy activity to be "select c as "FullData" from c". But the copy activity simply generates a NULL.I think this is because "FullData" is of type json on the document end and it is string on the sql end. I also tried to convert the json object to string within the cosmos db query. But I couldnt find any API to do so.I know we could write a custom activity to accomplish what I want to do, but is this possible to do with ADF out of the box functionality?
Copy json data from Azure cosmos db to Azure sql using Azure Data Factory
AFAIK, there is no pipeline-like mechanism in MongoDB. I would try using server-side scripts.
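Since the question also asks about sending the queries in parallel from Python, here is a rough sketch of that alternative using a thread pool with pymongo (the host, database, collection and field names are made up for illustration):

from concurrent.futures import ThreadPoolExecutor
from pymongo import MongoClient

client = MongoClient('mongo-host', 27017)   # hypothetical host
db = client.mydb                            # hypothetical database name

# One callable per independent query the endpoint needs.
queries = [
    lambda: list(db.users.find({'active': True})),
    lambda: list(db.orders.find({'status': 'open'})),
    # ... the remaining independent queries ...
]

with ThreadPoolExecutor(max_workers=len(queries)) as pool:
    futures = [pool.submit(q) for q in queries]
    results = [f.result() for f in futures]   # all result sets, fetched concurrently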
I am debugging a slow API endpoint in a web app which uses mongodb as storage. It turns out the request send 8 different queries to MongoDB, and group the data together to return. The MongoDB lives on another host, so the request involves 8 roundtrips.These 8 requests don't have any dependency among themselves, so if I can send the 8 queries in a batch, or in parallel, a lot of time can be saved.I am wondering if Mongo supports something like Redis's pipeline, or maybe send a script (like a lua script in Redis) for fetching data, so that I can get all data in one go?If not, is there a way to send the querys in parallel? (The app is based on python/tornado/pymongo)
Is there an equivalent of redis command pipeline in MongoDB?
At present, support for pipeline-type projects in delivery-pipeline-plugin is in active development. Refer to the JIRA ticket for information and progress: JENKINS-34040
I try to show some delivery pipeline instances in jenkins Delivery Pipeline View.If the delivery pipeline instance is defined as ‘Free Style’ or ‘MultiJob Project’ everything works fine, but the Job does not appear in the Delivery Pipeline View when defined as ‘Pipeline’.I tried the following: my_pipeline-job as a Post-Build-action -> Build other projects (manual step) ->Downstream Project Names->my_pipeline_job The result was a error message: my_pipeline_job cannot be build!The message disappears when I tried to build it as: my_pipeline-job as Post-Build-action ->Trigger parameterized build on other projects-> Build Triggers-> Projects to build->my_pipeline_job But the results will not be shown in Delivery Pipeline View.
Jenkins Delivery Pipeline View doesn't show pipeline jobs
I have a similar setup. I define a Checkout job, whose job it is to re-extract the source, explicitly passing a password in the clone URL. Once that's done, the submodule update works fine. This is the script:#!/bin/bash git clone --recursive https://myname:[email protected]/git/myname/my-project cd my-project git submodule update --remotePASSWORDis defined as a Secure property on the Environment Properties tab. It's a bit clunky and non-DRY, but it enabled the behaviour I wanted.I use the Checkout job as an input to the Build job (I probably could have done it as one big job, but I wanted to be able to visually distinguish failures in checkout and build.)
I would like to know how to build a project with private git submodules using IBM Bluemix Dev Ops Services.In my pipelines, I have a 'Build' job with the type 'Shell Script':#!/bin/bash git submodule init git submodule update --recursiveBut my submodules include a number of private repositories, and I get:Host key verification failed. fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists.In my local machine, I am able to run those commands because I have access and I am using my key. What can I do to make it work here? I do not wish to commit my private key into git.The repo for the app I am deploying is hosted on GitHub. And the private submodules are hosted on BitBucket.UpdateI tried to use my private key in the build console, but it did not work:echo "... my private key ..." >> ~/.ssh/throwaway_key chmod 400 ~/.ssh/throwaway_key ssh-agent bash -c 'ssh-add ~/.ssh/throwaway_key; git submodule update --recursive'Is it not working because I am inside a docker container? Do I have to update/etc/ssh/ssh_config? I don't have access to this inside the container that this job runs in.Update 2I also tried without success:echo "Host bitbucket.org Hostname bitbucket.org IdentityFile ~/.ssh/throwaway_key IdentitiesOnly yes" >> ~/.ssh/config
Bluemix Dev Ops: Building a project with private git submodules
Yes, it is possible. According to AWS support."You can install Task Runner on computational resources that you manage, such as an Amazon EC2 instance, or a physical server or workstation. Task Runner can be installed anywhere, on any compatible hardware or operating system, provided that it can communicate with the AWS Data Pipeline web service.This approach can be useful when, for example, you want to use AWS Data Pipeline to process data that is stored inside your organization’s firewall. By installing Task Runner on a server in the local network, you can access the local database securely and then poll AWS Data Pipeline for the next task to run. When AWS Data Pipeline ends processing or deletes the pipeline, the Task Runner instance remains running on your computational resource until you manually shut it down. The Task Runner logs persist after pipeline execution is complete."I did this myself as it takes a while to get the pipeline to start up, this start up time could be 10-15 minutes depending on unknown factors.http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-how-task-runner-user-managed.html
Can we use existing ec2 instance details while configuring data pipeline? If it is possible then what are the ec2 details that we need to provide while creating a pipe line?
Amazon web service Data pipeline
I do not think that Actors would be the right solution for this problem. The RunAsync() method is hard to simulate in an Actor. You could use Timers and Reminders for that, but it feels unnatural. So I would go with a service for this one.
While trying to implement Service Fabric's Reliable Services pipeline, I had these three approaches to chose from:And it looks likeCis a good way to go.Details here.In this case I need to implement kind of message pump between worker services.For example I have 2 kinds of worker services. First one is IO-bound and scalability not required. Second is CPU-bound and scalability is required for it, so it uses a partitioning. I don't care what exactly partition will be used for process concrete item, so message pump must acts as a load balancer and enqueues item to CPU-bound service with minimum items in input queue. For now I've created a stateful service for this purpose.In this form this looks very similar to TPL Dataflow pipeline.My question is just am I using Service Fabric properly? Is there an overengineering here?Do Reliable Actors fits better for this kind of pipelines?(or part of pipeline)
Service Fabric: Reliable Services pipeline with partitions load balancing
When using the following timeline I get 23 cycles:
Consider a 4-stage pipeline processor. The number of cycles needed by the four instructions I1, I2, I3, I4 in stages S1, S2, S3, S4 is shown below:

     S1  S2  S3  S4
I1   2   1   1   1
I2   1   3   2   2
I3   2   1   1   3
I4   1   2   2   2

What is the number of cycles needed to execute the following loop?

For (i = 1 to 2) { I1; I2; I3; I4; }

Options are: 16, 23, 28, 30

My explanation: where am I wrong?
What is the number of cycles needed to execute the following loop? [closed]
A few things that I think could make your life a bit easier.

Promote the email address field in the incoming message using a promoted property schema. This way you'll have the email address available later on.

Map the incoming message to a CSV format on the send port (you should map to your destination format as late as possible in the process).

Create a pipeline component that sets the following properties on your message. Make sure to create your component in a way that lets you configure these properties at run-time. Use the BizTalk Pipeline Component Wizard tool. The important properties are:

SMTP.Subject
SMTP.From
SMTP.SMTPHost
SMTP.SMTPAuthenticate
SMTP.MessagePartsAttachments

Make sure to set MessagePartsAttachments to "1" to get the BizTalk message body (your CSV in this case) as an attachment. Set the address to send to using your previously promoted property.

Use, for example, the Antrix SMTP Server for Developers app. This will basically snatch any messages sent to an SMTP server and store the files in a little tray app. Nice while developing and testing.
Here is my situation:- I receive an XML which contains an email address in the field (ie[email protected]). - This XML is then mapped to a CSV (The Email is not mapped to the CSV and it does not contain this email address). - I then need to send this CSV as an attachment to the email which was contained in the original XML.What I have done before is sent email through an SMTP adapter and used a custom pipeline component to attach a file to an email. BUT, the reason I am not going into depth about how hard I have tried to figure this out, and all my code etc... is because with the company I am working for I am unable to access an SMTP server on my desktop. I can only deploy solutions and test the SMTP functionality on a test server (i cannot develop them/debug etc on the test server), which has basically made this particular project a massive headache. So I have tried a few things, but continuing I feel, without some help is a lost cause.Can someone please point me in the right direction, or the steps I should take (code would be amazing), the objects I might need in an orchestration, or anything that would help me?Thanks so much for your help in advance.
Biztalk: XML containing email mapped to CSV, send as attachment
Isn't this related to the IIS 7 integrated pipeline? To be verified, but I think that those events are only triggered when IIS 7 is running in integrated pipeline mode.
When working with HTTP modules, has anyone noticed that the final two events in the pipeline -- PreSendRequestHeaders and PreSendRequestContent -- don't always run?I've verified that code bound to EndRequest will run, but will not when bound to either PreSendRequestHeaders or PreSendRequestContent.Is there a reason why? I thought perhaps it was a caching issue (with a 304 Not Modified, you don't actually send content...), but I've cleared caches and determined that the server is returning 200 OK, which would indicate that it sent content.This is a problem because the StatusCode of the response defaults to 200 and my understanding is that it doesn't get updated to something like a 404 or 206 until those two final methods. If I check the StatusCode during EndRequest, it will always read 200.
Why don't PreSendRequestHeaders and PreSendRequestContent run consistently?
If you know Ruby then a solution is to write a simple internal DSL that can generate whatever pipeline data types and reader/writer code you need. Generating XML is a quick way to get started. You can always change the DSL to generate another format later if required. You may also want to look at the Microsoft Phoenix compiler project for inspiration.
I'm developing a compiler framework for .NET and want a flexible way of defining pipelines. I've considered the following options:WWFCustom XML pipeline descriptionCustom pipeline description in code (using Nemerle's macros to define syntax for it)Other code-based descriptionRequirements:Must not depend on functionality only in the later versions of .NET (3+) since it's intended to be cross-platform and be used on top of managed kernels, meaning semi-limited .NET functionality.Must allow conditional pipeline building, so you can specify that certain command line options will correspond to certain elements and orders.WWF would be nice, but doesn't meet the first requirement. The others would work but are less than optimal due to the work involved.Does anyone know of a solution that will meet these goals with little to no modification?
Flexible compiler pipeline definitions
You could uselibrary(dplyr) data %>% group_by(a,b,c) %>% filter( d > quantile(d, na.rm = TRUE)[4] + 1.5 * IQR(d, na.rm = TRUE) | d < quantile(d, na.rm = TRUE)[4] - 1.5 * IQR(d, na.rm = TRUE))This returns you# A tibble: 2,464 x 5 ...1 a d b c <dbl> <chr> <dbl> <chr> <dbl> 1 10533 gas 321. CAISO 2011 2 10534 gas 51.8 CAISO 2012 3 15067 gas 52.6 CAISO 2013 4 25890 oil 51.0 ISONE 2010 5 26485 gas 416. PJM 2008 6 26489 gas 468. PJM 2012 7 38153 gas Inf SPP 2014 8 38154 gas Inf SPP 2015 9 38155 gas 67.4 SPP 2016 10 38156 gas 58.8 SPP 2017 # ... with 2,454 more rows
I have a dataset with four variables (a,b,c,d). I want to group the data by a,b,c then find out outliers for d.Here is the sample data:https://www.dropbox.com/s/ftp4eehqxzh7nn3/example.csv?dl=0I tried:outliers = data %>% group_by(a,b,c) %>% which (data$d > quantile (data$d, na.rm=T)[4] + 1.5*IQR(data$d, na.rm = T) | data$d < quantile (data$d, na.rm=T)[2] - 1.5*IQR(data$d, na.rm = T).However, I got errorargument to 'which' is not logical.Would appreciate if anyone can tell me what I got wrong and how should I fix the problem.
R using which function after group_by
This command probably creates an issue in the Docker shell:

yes | sam deploy

Try this command:

sam deploy --no-confirm-changeset --no-fail-on-empty-changeset

From https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-deploy.html:

--confirm-changeset | --no-confirm-changeset
  Prompt to confirm whether the AWS SAM CLI deploys the computed changeset.
--fail-on-empty-changeset | --no-fail-on-empty-changeset
  Specify whether to return a non-zero exit code if there are no changes to be made to the stack. The default behavior is to return a non-zero exit code.
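Applied to the deploy job from the question's .gitlab-ci.yml, the change is a one-line swap in the script section (sketch):

deploy-development:
  only:
    - feature/backend/ci/cd
  stage: deploy
  script:
    - sam build -p
    - sam deploy --no-confirm-changeset --no-fail-on-empty-changeset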
When I am trying to commit changes to gitlab for continuous integrations i am facing this error even though all my steps pass successfully, Gitlab CI shows thisCleaning up file based variables 00:01 ERROR: Job failed: exit code 1I am running 1 stages "deploy" at the moment here is my script for deploy:image: python:3.8 stages: - deploy default: before_script: - wget https://golang.org/dl/go1.16.5.linux-amd64.tar.gz - rm -rf /usr/local/go && tar -C /usr/local -xzf go1.16.5.linux-amd64.tar.gz - export PATH=$PATH:/usr/local/go/bin - source ~/.bashrc - pip3 install awscli --upgrade - pip3 install aws-sam-cli --upgrade deploy-development: only: - feature/backend/ci/cd stage: deploy script: - sam build -p - yes | sam deploy
.gitlab-ci.yaml throws "Cleaning up file based variables 00:01 ERROR: Job failed: exit code 1" at the end after successfully run the job
The Spark activity in a Data Factory pipeline executes a Spark program on your own or on-demand HDInsight cluster. Thisarticlebuilds on the data transformation activities article, which presents a general overview of data transformation and the supported transformation activities. When you use an on-demand Spark linked service, Data Factory automatically creates a Spark cluster for you just-in-time to process the data and then deletes the cluster once the processing is complete.Upload "Sales_Data_Aggregation_2.0_blob.py" to storage account attached to the HDInsight cluster and the modify the sample definition of a spark activity and create a schedule trigger and run the code:Here is the sample JSON definition of a Spark activity:{ "name": "Spark Activity", "description": "Description", "type": "HDInsightSpark", "linkedServiceName": { "referenceName": "MyHDInsightLinkedService", "type": "LinkedServiceReference" }, "typeProperties": { "sparkJobLinkedService": { "referenceName": "MyAzureStorageLinkedService", "type": "LinkedServiceReference" }, "rootPath": "adfspark", "entryFilePath": "test.py", "sparkConfig": { "ConfigItem1": "Value" }, "getDebugInfo": "Failure", "arguments": [ "SampleHadoopJobArgument1" ] } }Hope this helps.
I'm trying to reproduce the following architecture based on the following github repo:https://github.com/Azure/cortana-intelligence-price-optimizationThe problem is the part linked to the ADF, since in the guide it uses the old version of ADF: I don't know how to map in ADF v2 the "input" and "output" properties of a single activity so that they point to a dataset.The pipeline performs a spark activity that does nothing more than execute a python script, and then I think it should write data into the dataset I defined already.Here is the json of the ADF V1 pipeline inside the guide, which I cannot replicate:"activities": [ { "type": "HDInsightSpark", "typeProperties": { "rootPath": "adflibs", "entryFilePath": "Sales_Data_Aggregation_2.0_blob.py", "arguments": [ "modelsample" ], "getDebugInfo": "Always" }, "outputs": [ { "name": "BlobStoreAggOutput" } ], "policy": { "timeout": "00:30:00", "concurrency": 1, "retry": 1 }, "scheduler": { "frequency": "Hour", "interval": 1 }, "name": "AggDataSparkJob", "description": "Submits a Spark Job", "linkedServiceName": "HDInsightLinkedService" },
Azure Data Factory V2 - Input and Output
Try the link here; it might help you achieve it, and you can use the playground: https://jenkinsci.github.io/job-dsl-plugin/#method/javaposse.jobdsl.dsl.helpers.workflow.WorkflowDefinitionContext.cpsScm
I am trying to create pipeline jobs with jenkins dsl. the pipeline job takes the cpsscm if I specify the git url only without branches or credentials. but when I change the brancha nd add credentials, it doesn;t workpipelineJob("foo"){ definition { cpsSCM { git(GIT_URL,BRANCH) } } }The above works. but the following doesn't workpipelineJob("foobar"){ definition { cpsScm { scm{ git{ branch(BRANCH) remote{ credentials('kjsks2304-sid34-234') url(GIT_URL) } } } scriptPath("JenkinsFile") } } } }the credentials is the id in the credentials plugin in jenkins. The git repo I am using is a private bitbucket repository
Pipeline cpsSCM doesn't take url
Seems to be already handled for the single-column case: "Using same Label Encoder to test dataset? or new Label Encoder?" So I used the aforementioned multi-column solution, which worked fine.
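To make the per-column encoding reproducible for fresh incoming data, one option is to persist the fitted encoders themselves; here is a rough sketch (the file name and column selection are illustrative, not from the original answer):

import joblib
from sklearn.preprocessing import LabelEncoder

def fit_and_save_encoders(df, path="label_encoders.joblib"):
    # Fit one LabelEncoder per object column and persist the mappings.
    cols = df.select_dtypes(include="object").columns
    encoders = {col: LabelEncoder().fit(df[col]) for col in cols}
    joblib.dump(encoders, path)
    return encoders

def apply_saved_encoders(df, path="label_encoders.joblib"):
    # Reapply the exact same string-to-integer mapping to new data.
    encoders = joblib.load(path)
    out = df.copy()
    for col, le in encoders.items():
        out[col] = le.transform(out[col])   # raises ValueError on unseen labels
    return out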
How can I persistently encode the same string to the same column? "Label encoding across multiple columns in scikit-learn" proposes a nice way to handle a data frame with multiple categorical values. However, I am unsure whether this correctly persists (in a pickle) and would apply the same labels again to freshly incoming data. So far I used pandas directly and obtained the labels via .cat.codes of the category values. But now I need to integrate label encoding into a pipeline to deal with fresh incoming data. Would something like

le = LabelEncoder()
for col in df.select_dtypes([], ['object']).columns:
    df[col] = le.fit_transform(df[col])

or the proposed solution of the MultiColumnLabelEncoder suffice for my task?
persistent label encoding in sklearn pipeline
You are basically asking how to create your own (third-party) service on Bluemix. A good starting point could be "Extend your reach: offer your cloud services in the IBM marketplace". Please note that a service is not automatically accepted. It needs to fulfil basic quality, security and legal requirements.
I have a Java service to deploy to Bluemix which is a basic-auth protected REST service. The username and password is required in configuration.I have other Java services that are also deployed to Bluemix which I would like to use this (so the url/username/password is required)I'd like to only have one place where the username and password is maintained. I am able to create a user-provided service which contains the username and password, and bind that to both the service and the client-service, but this seems messy.In addition to that, I would have to manually maintain the URL of the service, given that this is "provided" to me by Bluemix, I think it's a bit silly.Is there a way I can specify to push an app to Bluemix, so it is also available as a service to bind?
Service which is also a user-provided service?
Write top1.stdinand then close it before callingp2.communicate():In [1]: import subprocess In [2]: %cpaste Pasting code; enter '--' alone on the line to stop or use Ctrl-D. :p1 = subprocess.Popen(['cat'], : stdin=subprocess.PIPE, : stdout=subprocess.PIPE) :p2 = subprocess.Popen(['head', '-n', '1'], : stdin=p1.stdout, : stdout=subprocess.PIPE) :p1.stdout.close() :-- In [3]: p1.stdin.write(b'This is the first line.\n') Out[3]: 24 In [4]: p1.stdin.write(b'And here is the second line.\n') Out[4]: 29 In [5]: p1.stdin.close() In [6]: p2.communicate() Out[6]: (b'This is the first line.\n', None)(Don't forget the newlines in the data you send tocat, or it won't work.)
I see this code snippet referenced quite a lot during discussions around Python subprocess pipelines. Obligatory link:https://docs.python.org/3.4/library/subprocess.html#replacing-shell-pipelineModified slightly:p1 = subprocess.Popen(['cat'], stdin=subprocess.PIPE, stdout=subprocess.PIPE) p2 = subprocess.Popen(['head', '-n', '1'], stdin=p1.stdout, stdout=subprocess.PIPE) # Allow p1 to receive a SIGPIPE if p2 exits. p1.stdout.close() output = p2.communicate()[0]This shell pipeline is pointless, except to succinctly demonstrate the challenge. Input"abc\ndef\nghi\n"and only"abc\n"should be captured inoutput.What is the best way to write data top1.stdin? I am aware of theinputargument tosubprocess.Popen.communicate(), but it won't work in a pipeline. Also, the solution needs to handling blocking correctly.My guess: Reverse engineer the code behindcommunicate()and create another version for this specific issue. Before I do that, I want to ask if there is a simpler solution of which I am not aware.
How to write data to stdin of the first process in a Python shell pipeline?
Let's consider a MIPS in which forwarding is activated. I think that in that case no hazard occurs: in fact, the ADD instruction is an integer operation that in the MIPS architecture requires only one clock cycle. Look at this diagram:

ADD $t3,$t1,$t2    IF ID EX MEM WB
SW  $t3,12($t0)       IF ID EX  MEM WB

As you can see, no hazard occurs because the SW instruction stores the datum two clock cycles after the result is put in $t3 by ADD. Actually, in similar situations a hazard can occur, but only if the unit is a multicycle one (if it requires more than one clock cycle to compute the data). Look at this example, in which the ADD.D instruction uses a floating-point adder that requires 4 clock cycles to perform the calculation:

ADD.D F2,F4,F5      IF ID A0 A1 A2 A3 MEM WB
S.D   F2,somewhere     IF ID EX X0 X1 X2  MEM WB

X0 and X1 are RAW stalls while X2 is a structural stall: in the former case S.D must wait for ADD.D to finish; in the latter, the MIPS cannot access memory twice in the same clock cycle, so a structural stall occurs.
On MIPS architecture with pipelining and forwarding:add $s0, $t1, $t2 sw $s0, 0($sp)The add instruction will have the result ready at step 3 (execute operation) however I presume that the sw instruction want the result at step 2 (Instruction decode & register read).There is a solved exercise in the book Computer Organization and Design by David A. Patterson:Find the hazards in the following code segment and reorder the instructions to avoid any pipeline stalls:lw $t1, 0($t0) lw $t2, 4($t0) add $t3, $t1,$t2 sw $t3, 12($t0) lw $t4, 8($01) add $t5, $t1,$t4 sw $t5, 16($t0)Solution:lw $t1, 0($t0) lw $t2, 4($t1) lw $t4, 8($01) add $t3, $t1,$t2 sw $t3, 12($t0) add $t5, $t1,$t4 sw $t5, 16($t0)In the solution it correctly recognizes the load-use hazard and rearranges the code accordingly, but is there an execute-store hazard as well?
Is there an execute-store data hazard in MIPS?
The short answer is:root@debian: yes y y ... ^C root@debian:;-)
This is a pointless task, I know. But I'm just messing around and trying to familiarize myself.I thought this might work, but no:root@debian: Jibberish 2> file.txt && file.txt < /dev/tty0I thought this might generate an error message which would then be sent to file.txt which would in turn be the input back into the shell (/dev/tty0).Anyway, if anyone knows how to make an infinite loop using just redirects and pipelines I'd be interested to know.Thanks
Is an infinite loop possible in Unix shell using just redirection and pipelines?
Data hazards:
(a) Line 3 needs to wait for line 2 to evaluate the value of $4 (in the first iteration)
(b) Line 4 needs to wait for line 2 to evaluate the value of $4 (every iteration)
(c) Line 5 needs to wait for line 4 to evaluate the value of $5 (every iteration)
(d) Line 8 needs to wait for line 7 to evaluate the value of $2

Control hazard:
(a) Line 8 will stall while determining if $2 is equal to $0

Moving lines 6 and 7 to between lines 4 and 5 (alternatively, moving line 5 to between lines 7 and 8) and swapping their order, i.e. line 7 before line 6, would provide the most savings in stalls, because that stall occurs on each iteration of the loop. The swap is necessary to avoid the data hazard with line 8.
I am wondering if someone could verify my answer to this question please! I have a midterm next week and the TA has not posted solutions to this question yet:Consider the following MIPS assembly code and identify all pipeline hazards under the assumption that there are no pipeline optimizations implemented- including forwarding. The first column of numbers are line numbers that you can refer to in your explanation.1. addi $3, $0, 100 2. addi $4, $0, 0 3. loop: addi $4, $4, 4 4. add $5, $4, $3 5. sw $5, 100($4) 6. addi $1, $1, -1 7. slt $2, $4, $3 8. bne $2, $0, loop 9. jr $31Reorder the instructions to reduce the number of stalls to a minimumMy answer:Moving from line 2 to line 3 (from outside loop to inside), there is a hazard because $4 needed on line 3 for addition is dependent on the value set in $4 on line 2.Line 4 has a hazard because it is dependent on the value set for $4 in line 3.Line 5 has a hazard because it is dependent on the value set for $4 in line 4.Line 8 has a hazard because it is dependent on the value set for $2 in line 7.Reordered instructions:addi $4, $0, 0 2 addi $3, $0, 100 1 loop: addi $4, $4, 4 3 addi $1, $1, -1 6 add $5, $4, $3 4 slt $2, $4, $3 7 sw $5, 100($4) 5 bne $2, $0, loop 8 jr $31 9
Pipeline Hazards
Here is the framework exactly for this purpose (CalledDexecutor), You can referthisandthisDzone articles for this use case example. For workflow like usecase referthis.Here is how you can do using Dexecutor.DexecutorConfig<String, String> config = new DexecutorConfig<>(executorService, new TaskProvider()); DefaultDexecutor<String, String> executor = new DefaultDexecutor<String, String>(config); executor.addDependency("A", "E"); executor.addDependency("B", "F"); executor.addIndependent("C"); executor.addIndependent("D"); executor.execute(ExecutionConfig.NON_TERMINATING);Disclaimer : I am the owner of this framework
I have a system that has a bunch of activities (around 40). Each of the activities either call a service or perform some computation. This system has been written in Java. Currently all these activities are executed sequentially and the entire process takes about 2 - 3 seconds. I am trying to optimize the system and try to reduce the latency. I noticed that some of the activities have a data dependence and some of them are independent. I am trying to make these activities run in parallel while also maintaining a sequence for activities that have a data dependence. For example, assume activities 'A' through 'F' are being executed sequentially in this order :A->B->C->D->E->F (Activities) 1 2 3 4 5 6 (Time Units)Assume that the data produced by A is used by E and the data produced by B is used by F and the rest of the activities do no depend on any other data. Instead of running these activities sequentially, i should be able to run them parallelly in this order -A->E B->F C D 1 2 (Time)So instead of 6 time units, the system should be able to complete the entire process in 2 time units. Is there any Open source Java framework that i can use to handle such a workflow and can seamlessly execute activities once data is available?
Asynchronous open source workflow software in Java
I don’t have the SVN stuff to try the same here. But, from what I see, you are missing a ForEach-Object (aliases % and foreach) after the pipe. Try this:

svn list svn://server/repository/myPath | ForEach-Object { $_.TrimEnd("/") }

or

svn list svn://server/repository/myPath | % { $_.TrimEnd("/") }
Is there a possibility to manipulate the items in a pipeline of PowerShell?In more concrete words: I start my pipeline with an "svn list". This returns me a list of paths in my repository, all directories with a trailing "/". The list of paths should be stored in an array, but without the "/".This:svn list svn://server/repository/myPath | $_.TrimEnd("/")does not work becauseTrimEndis an expression and may not be used within a pipeline.The result of the pipeline should be something like:$a = @("foo", "bar)
Using a substring in a PowerShell pipeline
In the example you gave, I would add an additional step using sklearn.model_selection.train_test_split:

folds = 4
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=(1/folds), random_state=0, stratify=y)

scalar = StandardScaler()
clf = svm.LinearSVC()
pipeline = Pipeline([('transformer', scalar), ('estimator', clf)])

cv = KFold(n_splits=(folds - 1))
scores = cross_val_score(pipeline, X_train, y_train, cv = cv)

I think best practice is to only use the training data set (i.e., X_train, y_train) when tuning the hyperparameters of your model, and the test data set (i.e., X_test, y_test) should be used as a final check, to make sure your model isn't biased towards the validation folds. At that point you would apply the same scaler that you fit on your training data set to your testing data set.
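For that final check, refitting the pipeline on the full training split and scoring it on the held-out split applies the training-set scaling automatically (a minimal continuation of the snippet above):

pipeline.fit(X_train, y_train)         # scaler statistics are learned from X_train only
print(pipeline.score(X_test, y_test))  # X_test is transformed with those same statistics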
I previously saw apostwith code like this:scalar = StandardScaler() clf = svm.LinearSVC() pipeline = Pipeline([('transformer', scalar), ('estimator', clf)]) cv = KFold(n_splits=4) scores = cross_val_score(pipeline, X, y, cv = cv)My understanding is that: when we apply scaler, we should use3out of the 4 folds tocalculatemean and standard deviation, then we apply the mean and standard deviation to all 4 folds.In the above code, how can I know that Sklearn is following the same strategy? On the other hand, if sklearn is not following the same strategy, which means sklearn wouldcalculatethe mean/std from all 4 folds. Would that mean I should not use the above codes?I do like the above codes because it saves tons of time.
Using scaler in Sklearn PIpeline and Cross validation
You use wc (word count) with the -w option to count words in a file, or -l for lines.

$ cat file.txt
this file has 5 words.
$ wc -w file.txt   # Print number of words in file.txt
5 file.txt
$ wc -l file.txt   # Print number of lines in file.txt
1 file.txt
$ wc file.txt      # No option: print lines, words, chars in file.txt
1  5 23 file.txt

Other options to wc:

-c, --bytes            print the byte counts
-m, --chars            print the character counts
-l, --lines            print the newline counts
-L, --max-line-length  print the length of the longest line
How can I write a program using standard UNIX utilities that will read data from standard input one character at a time and output the results to standard output? I know that it runs similarly to a C program in this case. I was told that this could be done in one line of code, but I have never done Unix pipeline programming, so I am just curious. The purpose of the program is to read data from standard input, count the number of words and lines, and output to standard output the total number of words and lines. I came up with the following code but I am unsure:

tr A-Z a-z < file | tr -sc a-z | sort uniq -c wc '\n'

Any thoughts or suggestions on how I can get what I want?
count number of words and lines from stdin [closed]
In my opinion the code is much clearer if you bind the intermediate value with let.

let useCase someValue (valueA: TypeA) (valueB: TypeB) =
    let myValue = someValue |> ... |> toMyType
    myFun myValue valueA valueB

You can also use backward pipes as follows:

let useCase someValue (valueA: TypeA) (valueB: TypeB) =
    someValue |> ... |> toMyType |> myFun <| valueA <| valueB
I came across the need for a function with the signature'a -> 'b -> ('a -> 'b -> 'c) -> 'cto use for applying two arguments when piping:let apply2 x y f = f x yI needed this because I am using a functionmyFun : MyType -> TypeA -> TypeB -> ResultTypeand I use it in another function like this:let useCase someValue (valueA: TypeA) (valueB: TypeB) = someValue |> ... |> toMyType |> myFun |> apply2 valueA valueBapply2fits the bill, but I can't shake the feeling that I could use a built-in function or operator or that I am missing some more fundamental way of doing this (barring lambdas, which IMHO reads worse in this case). Note that I can't easily switch the parameter order ofmyFun(it's a GiraffeHttpHandler, so the last two parameters have to beHttpFuncandHttpContext, designated byTypeAandTypeBabove).Is theapply2function with the signature I've described a fair thing to use in functional programming, or am I missing something obvious? If this is a well-known concept, does it have a better name?
Is "a -> b -> (a -> b -> c) -> c" to apply two parameters a standard functional concept?
Yes, where possible, the compiler will optimize away the operators just like it does with|>.Just as a fun example, the compiler will optimize this codelet add a b = a + b let test() = (1,2) ||> addto this3And even if you parameterisedtestlet test t = t ||> addit would compile to this C# equivalentint test(int x, int y) { return x + y; }In real, more complex code, you might not see such extreme optimizations but it gives you an idea of what the compiler can do.
This question basically expands onthis questionwhere the answer is that the single argument pipeline operator|>is compiled to the same CIL as the unpiped version. But what about||>and|||>? MSDN suggests that real tuples are used to wrap the arguments.But do||>and|||>really allocate a .NET Tuple to wrap the arguments and then unwrap them againjust pass them to a function or is the compiler optimized to handle these operators by just rewriting the CIL like it does with|>?Update:Thank you for the answers. It depends on whether the--optimize+parameter is passed to the F# compiler.Default release builds in with F# 3.1 in Visual Studio 2013 don't create tuples. Default debug builds do create tuples.
Performance implications of ||> and |||> pipeline operators in F#
You can use snakemake's ability to take functions as input and put the if loop in a function. A sample implementation could be as followsdef get_input(wildcards): input_list = [] if config["DEG"]["exec"]: input_list.append("DEG/DEG.txt") if config["DTU"]["exec"]: input_list.append("DTU/DTU.txt") return input_list rule all: input: get_inputYou can customize theget_inputfunction to include additional conditions if required. This is documented furtherhere.Another alternate way of doing this which isfar less readable and not recommendedbut can work incase an addtional function is to be avoided is as followsrule all: input: lambda wildcards: "DEG/DEG.txt" if config["DEG"]["exec"] else [], lambda wildcards: "DTU/DTU.txt" if config["DTU"]["exec"] else [],
In my Snakemake project, I have a config.yaml file which allows users to run certain steps of the pipeline or not, for example:DEG : exec : TrueAnd so, in the Snakefile, I include the rules associate with the DEG:if config["DEG"]["exec"]: include: "rules/classic_mapping.smk" include: "rules/counts.smk" include: "rules/run_DESeq2.smk"The problem is, now I would like to dynamically specify the output files in the "all" rule, so that Snakemake knows which files to generate based on the parameters entered by the user. For example I had in mind to proceed as follows:rule all: input: if config["DEG"]["exec"]: "DEG/DEG.txt" if config["DTU"]["exec"]: "DTU/DTU.txt"but it doesn't work: SyntaxError in line 58 of Unexpected keyword if in rule definition (Snakefile, line 58)I would need an outside point of view to find an alternative because Snakemake should not work on this wayThank's by advance
Put optional input files for rule all in Snakemake
There's a way to do that with only:, but I'd suggest moving to rules: as only: is going to be deprecated. So you will not need two jobs with different conditions; you can do a branching condition:

stages:
  - firstjob
  - test
  - build
  - deploy

workflow:
  rules:
    - if: $CI_MERGE_REQUEST_IID
    - if: $CI_COMMIT_TAG
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

FirstJob:
  stage: firstjob
  script:
    - echo "Hello Peoples!"
    - sleep 1
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
      # when: always # is a default value
    - when: manual
      # allow_failure: false # is a default value

AC-test:
  needs: [FirstJob]
  stage: test
  script:
    - echo "AC Test is running"
    - sleep 10

ProdJobBuild:
  stage: build
  needs: [AC-test]
  script:
    - echo "Building thing to prod"

With it, the pipeline checks if the job is called by a schedule, and runs. If not, it stays manual.

(I took the liberty of picking the MR-style of workflow to avoid the double pipelines.)
I have a little problem with my GitLab pipeline.I would like to run a manual job with scheduled rule or find a way to run a scheduled pipe with my jobs without rewrite the pipe.As you see in the example I have 2firstjobtagged job. One is manually and one is scheduled. My problem that if I run the scheduled workflow, the AC-test won't start and if I try to run theFirstJobby scheduled rule, it won't start because ofwhen: manualsection.Here is my example:stages: - firstjob - test - build - deploy FirstJob: stage: firstjob script: - echo "Hello Peoples!" - sleep 1 when: manual allow_failure: false FirstJobSchedule: stage: firstjob script: - echo "Hello Scheduled Peoples!" - sleep 1 only: - schedule allow_failure: false AC-test: needs: [FirstJob] stage: test script: - echo "AC Test is running" - sleep 10 ProdJobBuild: stage: build needs: [AC-test] script: - echo "Building thing to prod" ProdJobDeploy: stage: deploy needs: [ProdJobBuild] script: - echo "Deploying thing to prod"Is there a possibility to solve this problem somehow?Did somebody ever suffer from this problem?
GitLab manual job triggered by schedule
With a scripting language like python (or php), things are not compiled down to bytecode like in .net or java.

Wrong: everything you import in Python gets compiled to bytecode (and saved as .pyc files if you can write to the directory containing the source you're importing -- standard libraries &c are generally pre-compiled, depending on the installation choices of course). Just keep the main script short and simple (importing some module and calling a function in it) and you'll be using compiled bytecode throughout. (Python's compiler is designed to be extremely fast -- with implications including that it doesn't do a lot of otherwise reasonable optimizations -- but avoiding it altogether is still faster;-).
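A minimal sketch of how to see this caching in action (the module name mymod.py here is hypothetical):

import py_compile
import dis

# Compiling a module by hand writes its bytecode to disk
# (Python 3 puts it under __pycache__/, Python 2 wrote mymod.pyc next to the source).
py_compile.compile("mymod.py")

# dis shows the bytecode the compiler produces for a function.
def add(a, b):
    return a + b

dis.dis(add)

On a normal import of that module, the interpreter reuses the cached bytecode as long as the source hasn't changed, so only the top-level script gets re-compiled on each run.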
With a scripting language like python (or php), things are not compiled down to bytecode like in .net or java.So does this mean that on every request, it has to go through the entire application and parse/compile it? Or at least all the code required for the given call stack?
How exactly does a python (django) request happen? does it have to reparse all the codebase?
Note that if your VCF files are actuallybgzipcompressed andtabixindexed, you could instead use thefromFilePairsfactory method to create your input channel. For example:params.vcf_files = "./input_files/*.vcf.gz{,.tbi}" params.results_dir = "./results" process FILTERING { tag { sample } publishDir("${params.results_dir}/after_filtering", mode: 'copy') input: tuple val(sample), path(indexed_vcf) output: tuple val(sample), path("${sample}.filtered.vcf") """ vcftools \\ --vcf "${indexed_vcf.first()}" \\ --mac 1 \\ --minQ 20 \\ --recode \\ --recode-INFO-all \\ --out "${sample}.filtered.vcf" """ } workflow { vcf_files = Channel.fromFilePairs( params.vcf_files, checkIfExists: true ) FILTERING( vcf_files ).view() }Results:$ nextflow run main.nf N E X T F L O W ~ version 22.10.0 Launching `main.nf` [thirsty_torricelli] DSL2 - revision: 8f69ad5638 executor > local (3) [7d/dacad6] process > FILTERING (C) [100%] 3 of 3 ✔ [A, /path/to/work/84/f9f00097bcd2b012d3a5e105b9d828/A.filtered.vcf] [B, /path/to/work/cb/9f6f78213f0943013990d30dbb9337/B.filtered.vcf] [C, /path/to/work/7d/dacad693f06025a6301c33fd03157b/C.filtered.vcf]Note thatBCFtoolsis actively maintained and is intended as a replacement for VCFtools. In a production pipeline, BCFtools should be preferred.
I have a nextflow script that runs a couple of processes on a single vcf file. The name of the file is 'bos_taurus.vcf' and it is located in the directory /input_files/bos_taurus.vcf. The directory input_files/ contains also another file 'sacharomyces_cerevisea.vcf'. I would like my nextflow script to process both files. I was trying to use a glob pattern like ch_1 = channel.fromPath("/input_files/*.vcf"), but sadly I can't find a working solution. Any help would be really appreciated.#!/usr/bin/env nextflow nextflow.enable.dsl=2 // here I tried to use globbing params.input_files = "/mnt/c/Users/Lenovo/Desktop/STUDIA/BIOINFORMATYKA/SEMESTR_V/PRACOWNIA_INFORMATYCZNA/nextflow/projekt/input_files/*.vcf" params.results_dir = "/mnt/c/Users/Lenovo/Desktop/STUDIA/BIOINFORMATYKA/SEMESTR_V/PRACOWNIA_INFORMATYCZNA/nextflow/projekt/results" file_channel = Channel.fromPath( params.input_files, checkIfExists: true ) // how can I make this process work on two files simultanously process FILTERING { publishDir("${params.results_dir}/after_filtering", mode: 'copy') input: path(input_files) output: path("*") script: """ vcftools --vcf ${input_files} --mac 1 --minQ 20 --recode --recode-INFO-all --out after_filtering.vcf """ }
Nextflow script to process all files in given directory
$line.Split(',')[3].Split('=')[1]
I have the following code which works but I am looking for a way to do this all inline without the need for creating the unnecessary variables$myArray1and$myArray2:$line = "20190208 10:05:00,Source,Severity,deadlock victim=process0a123b4"; $myArray1 = $line.split(","); $myArray2 = $myArray1[3].split("="); $requiredValue = $myArray2[1];So I have a string$linewhich I want to:split by commas into an array.take the fourth item[3]of the new arraysplit this by the equals sign into another arraytake the second item of this array[1]and store the string value in a variable.I have tried usingSelect -indexbut I haven't been able to then pipe the result and split it again.The following works:$line.split(",") | Select -index 3However, the following results in an error:$line.split(",") | Select -index 3 | $_.split("=") | Select -index 1Error message:Expressions are only allowed as the first element of a pipeline.
Create and split an array twice all inline in Powershell
Just "output" the object from the function like so:

function RunIE {
    $ie = New-Object -ComObject InternetExplorer.Application
    Write-Output $ie
}

or more idiomatically

function RunIE {
    New-Object -ComObject InternetExplorer.Application
}

Then assign the output to a variable in your main script:

$ie = RunIE
How is it possible to call a function from another PowerShell script and return the object?

Main Script:

# Run function script
. C:\MySystem\Functions.ps1
RunIE
$ie.Navigate("http://www.stackoverflow.com")
# The Object $ie is not existing

Functions Script:

function RunIE($ie) {
    $ie = New-Object -ComObject InternetExplorer.Application
}
PowerShell, Calling a function from another PS Script and returning an Object
I would rather put the before-script commands in a separate stage, and keep the rules in the first stage. That way, the first stage triggers only if the rules match. It can run the commands of the former "before_script", and the next stages can go on with your script.
in gitlab-ci I have

before_script:
  - apt update && apt upgrade -y
  - apt install -y

and in my job on stages I added a rule

merge_request:
  stage: test
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event" && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME != "master"
    - if: $CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "master"

what's happening is that the job is triggered in its stage and the before script runs, then it gets to the rule and sees that there's nothing to do. That slows down the pipeline. Is there a way to have the rule evaluated before the "before_script"?

here's the pipeline

Thank you
In Gitlab-ci is there a way to run before scripts only after rule is match
I think maybe the cleanest approach is to use a FunctionTransformer. Note in particular that the default value of the parameter func gives you an "identity transformer":

[...] If func is None, then func will be the identity function.
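A minimal sketch of that idea (the data shapes and the final classifier are made up for illustration):

import numpy as np
from sklearn.pipeline import FeatureUnion, make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# FunctionTransformer() with no func is the identity, so this union
# concatenates the original features with their PCA projection.
features = FeatureUnion([
    ("original", FunctionTransformer()),
    ("pca", PCA(n_components=2)),
])

X = np.random.rand(100, 10)
y = np.random.randint(0, 2, 100)

model = make_pipeline(features, LogisticRegression())
model.fit(X, y)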
I am trying to join the result of a PCA to the original features, to do this I tried aFeatureUnionof the PCA with a column transformer that justpassthroughall columnsfeature_selector = FeatureUnion( [ ("original", make_column_transformer(('drop', []), reminder='passthrough'), ("pca", PCA()) ]) my_pipeline = make_pipeline(preprocessor, feature_selector, model)But this seems a bit counter intuitive.Is there is any cleaner way of doing this? maybe a feature selector that select all columns instead of column transformer?
passthrough all columns in sklearn pipeline
Are you setting the right variable? I think you can set the RUNNER_ALLOW_RUNASROOT variable to get by this problem using export RUNNER_ALLOW_RUNASROOT=1, or you can provide it directly to the command:

RUNNER_ALLOW_RUNASROOT="1" ./config.sh --url https://github.com/basobaasnepal/BasobaasWeb --token DFGFSDF234sf3fg45hd
Hi I am new in github actions and I am trying to create a CICD pipline using Github action. I am using a digital ocean droplet as my server and I am trying to create a runner as said in github->settings->actionsWhen I wrote the following command./config.sh --url https://github.com/basobaasnepal/BasobaasWeb --token DFGFSDF234sf3fg45hdI got this: Must not run with sudoI tried to change the from root user to non root user but didn't work. I also triedexport {AGENT_ALLOW_RUNASROOT="1"}bur
Must not run with sudo
You can try the following:

def isDirEmpty() {
    def myDirectory = sh(script: "ls", returnStdout: true).trim()
    println(myDirectory)
    return null == myDirectory || "".equals(myDirectory)
}

Using sh is the safest way of finding this thing out (from my experience).
I'm looking for a way to tell Jenkins in a stage that if there is no file in a particular folder, it will cancel the job and mark it as unstable. Could someone possibly help me? I think the whole thing can be solved with an if/else query.

stage('Building') {
    if nothing in the folder {
        exit
        echo '[FAILURE] Failed to build'
        currentBuild.result = 'FAILURE'
    }
}
Jenkins Pipeline Exit if nothing in the folder
You always have to quote expressions with the * character on the command line, to avoid local shell expansion. The correct syntax is this:

--paths '/*'

Otherwise you are trying to invalidate names based on what's in the root directory on your local filesystem (as captured by the *, expanded by the shell).

(From the comments: if you're running into issues on PowerShell, try --paths \"/*\".)
I have created a Jenkins job that invalidate the cache each time that my frontend project is deployed. The issue is that although the AWS Website display that the cache is invalidating, when the job finish, the cache isnt completly cleaned, so I need to invalidate it manually through the AWS Website...The way to invalidate the cache automatically that I used is throughaws containerwhere I execute the following command:aws cloudfront create-invalidation --distribution-id ${DISTRIBUTION_ID} --paths /* > output.jsonThe output file will contain a json where I can get differents keys: values. Two of they that I use isIdandStatus. Once the invalidation was created, I another pipeline step I execute the following:aws cloudfront get-invalidation --distribution-id ${DISTRIBUTION_ID} --id ${id_invalidator} > status_invalidation.jsonWith the previously command I quest to the API each 50 second (through asleep 50) the status of the invalidation. When the validation return a `Status = Completed', the job is finished. This condition are inside a while loop.Someone know why this is happened?
Why aws cli dont invalidate correctly the cache - AWS Cloudfront
Unfortunately, that is the expected behavior. The Pipeline UI in BizTalk Administrator is completely different from the UI in Visual Studio and the extended controls are only supported in Visual Studio.
i'm trying to add a drop down design-property into a pipeline component. I found this articlehttp://social.msdn.microsoft.com/Forums/en-US/dd732ffc-0372-4710-a849-370bbdb65419/custom-pipeline-component-with-an-enum-property-to-display-a-custom-drop-down-list?forum=biztalkgeneraland i followed all steps. The result is that i can see drop down into pipeline properties in visual studio but when i associate it to receive port i can only see text box and not dropdown property.
How to add drop down property into biztalk pipeline component
One way to do this is to output a custom object after collecting the properties you want. Example:

Get-WmiObject -Class Win32_Service | foreach-object {
    $displayName = $_.DisplayName
    $processID = $_.ProcessID
    $process = Get-Process -Id $processID
    new-object PSObject -property @{
        "DisplayName" = $displayName
        "Name" = $process.Name
        "CPU" = $process.CPU
    }
}
Suppose I have the following PowerShell script:Get-WmiObject -Class Win32_Service | Select DisplayName,@{Name="PID";Expression={$_.ProcessID}} | Get-Process | Select Name,CPUThis will:Line 1: Get all services on the local machineLine 2: Create a new object with the DisplayName and PID.Line 3: Call Get-Process for information about each of the services.Line 4: Create a new object with the Process Name and CPU usage.However, in Line 4 I want to also have the DisplayName that I obtained in Line 2 - is this possible?
Select-Object with output from 2 cmdlets
Consider simply using the -name switch of Get-ChildItem (aka ls, dir):

ls DIRECTORY -recurse -include PATTERN -name

This way is native, clean, and effective.
I like to use the following code to emulate the Unix "find" behavior:ls DIRECTORY -recurse -include PATTERN | foreach { "$_" }In fact, there are a couple of other commands that I'd like to append this| foreach { "$_" }to. So I'm trying to find a way to make this easier to type. I tried stuff like this:function xfind { ls $args | foreach { "$_" } }And then I invoked it like so:xfind DIRECTORY -recurse -include PATTERNBut that seemed to do the wrong thing...
Alias for "| foreach { "$_" }" in PowerShell
As a disclaimer, I've never worked with a real MIPS machine, but I imagine that using a branch delay slot for another branch will almost certainly cause problems. One common practice on processors like MIPS is to use the branch delay slot for a no-op, such as ori $0, $0, 0, just to make sure that nothing executes that isn't supposed to.
I was playing around with branch delay slots. Tried that on spim.j some j a j b j c j d ori $9, $0, 13 some: a: b: c: d:For my surprise it changed the $9 to 13. So my question is can a delay slot propagate or this is a spim thing and doesn't happen on real mips32 processors? If this is the expected behavior can someone give me a little enlightenment on what's happening there?
Does mips branch delay slots propagates through successive branches?
In order to make your pipeline successful, you need to implement DO-IF-SKIP-ELSE. That is, you can add a dummy Wait activity on the skip path of the true-block activity.

Check the pipeline image:

Adding a dummy activity when the true activities are skipped will make your pipeline succeed. (From the follow-up comments: a pipeline is a success if and only if all leaf activities succeed, so if the false-path activities also fail, the overall pipeline still fails.)
I have a pipeline which i'm working on, i'm doing a lookup to check if few files are there, if the folder path is not there i have to perform certain processes, when the folder path is not present, the lookup activity fails and the red-line flow triggers and does few post-processes.When those post-processes succeed, the overall pipeline fails, the error message says the lookup failed so overall pipeline also failed, I want my overall pipeline to also show succeed.https://learn.microsoft.com/en-us/azure/data-factory/tutorial-pipeline-failure-error-handlingThis documentation says it'll succeed but it still fails.
ADF Pipeline to Succeed even if activity fails
I'd suggest using the magrittr %T>% "tee" pipe for the pass-through, with an anonymous function expression:

library(magrittr)

mtcars %>%
  filter(mpg > 30) %T>%
  {\(x) (nrow(mtcars) - nrow(x)) %>% print}()

# [1] 28
#                 mpg cyl  disp  hp drat    wt  qsec vs am gear carb
# Fiat 128       32.4   4  78.7  66 4.08 2.200 19.47  1  1    4    1
# Honda Civic    30.4   4  75.7  52 4.93 1.615 18.52  1  1    4    2
# Toyota Corolla 33.9   4  71.1  65 4.22 1.835 19.90  1  1    4    1
# Lotus Europa   30.4   4  95.1 113 3.77 1.513 16.90  1  1    5    2

(From the comments: if you want to report what changed at every step of a pipeline, the tidylog package does pretty much that.)
pass_through <- function(data, fun) {fun(data); data}#fromPrinting intermediate results without breaking pipeline in tidyverseanswermtcars %>% filter(mpg>15) %>% pass_through(. %>% nrow %>% print)From the code above, I can print the number of rows of the data after filtering. But I cannot print the difference of number of rows between the original data and the data after filtering.> mtcars %>% filter(mpg>15) %>% pass_through(. %>% nrow %>% print(.-nrow(mtcars))) Error in print.default(., . - nrow(mtcars)) : invalid printing digits -6Question 1: Are there any ways to check the difference without using any extra variables and breaking pipeline?Question 2: Are there any ways to check the difference between 'n'th pipeline and 'n+1'th pipeline without using any extra variables and breaking pipeline?For example, by using the code from Gregor Thomas,mtcars %>% filter(mpg > 30) %T>% #let this output to be y {\(x) (nrow(mtcars) - nrow(x)) %>% print}() %>% filter(cyl > 5) %T>% {\(x) (nrow(y) - nrow(x)) %>% print}() #I know it is illegal to write 'y'
How to compare the number of rows in a pipeline in r?
As you've found in the docs, how frequently a schedule is run is partially determined by the cron. By default, the cron is run on the 19th minute of every hour, which means it won't run more than once an hour unless the default has been changed.

The gitlab.rb file that needs to be edited is part of the GitLab installation (as the other poster mentioned), so you would either need to follow the configuration instructions to edit the file, or talk to whomever is the GitLab administrator for the instance you're using to change it.

For reference, the GitLab.com settings page lists the cron frequency for the SaaS version of GitLab.
I created a .gitlab-ci.yml file in my Git project and I would like it to run every 5 minutes.I created a new schedule in gitlab.com ( CI/ CD -> schedules -> new schedule) and used custom Interval Pattern with the pattern - * /5 * * * *But this is not working, I saw that the pipeline run every hour and not every 5 minutes as I expected.I used the pipeline schedules documentation and I saw that maybe the reason is that schedules are handled by Sidekiq and I need to edit gitlab.rb file -I can't find what is this "gitlab.rb file", I tried to created it manually in my project, and tried to put this file in my project under etc/gitlab/, and this not working for me.Please help me with this issue, if you know what I should do about the gitlab.rb file or do you have another idea of how to run ci/cd every 5 minutes.Thank you!
Gitlab CI\CD schedule pipline Interval
Because 2/10 %>% ceiling() works as 2/(10 %>% ceiling()), i.e. %>% has precedence over /.

Put differently, 2/10 %>% ceiling() = 2/10 = 0.2
When I try to useceiling()function, it works okay, but when I try to divide something and give it to the ceiling function using pipeline operator(2/10 %>% ceiling()), I get a problem.ceiling(0.2) 1 ceiling(2/10) 1 2/10 0.2 2/10 %>% ceiling() 0.2 2 %>% `/`(10) 0.2 2 %>% `/`(10) %>% ceiling() 1 0.2 %>% ceiling() 1
Why combining pipeline operator with ceiling function doesn't work in R?
Bag of words is what CountVectorizer does – building a vector of word counts for each sentence. TfIdf takes the BoW and transforms that matrix to, well, tf-idf – frequency in sentence + inverted document frequency.

This part of the pipeline can be substituted with TfidfVectorizer – it's actually BoW + TfIdf. The latter is rarely used without BoW, so the combined version makes sense if a classifier is all you need at the end of the day.
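A minimal sketch of that equivalence (the classifier is just a placeholder):

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Two-step version: raw counts first, then tf-idf weighting.
two_step = Pipeline([
    ('vect', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    ('clf', LogisticRegression()),
])

# Single-step version: TfidfVectorizer is documented as equivalent to
# CountVectorizer followed by TfidfTransformer.
one_step = Pipeline([
    ('tfidf', TfidfVectorizer()),
    ('clf', LogisticRegression()),
])

Both pipelines feed the same features to the classifier; the two-step form just makes the intermediate bag-of-words matrix explicit.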
Recently I started reading more about NLP and following tutorials in Python in order to learn more about the subject. While following one of the tutorials I observed that they were using the sparse matrix of word counts in each tweet (created with CountVectorizer) as input to TfidfTransformer which handles the data and feeds it to the classifier for training and prediction.pipeline = Pipeline([ ('vect', CountVectorizer()), ('tfidf', TfidfTransformer()), ('clf', LogisticRegression()) ])As no explanation was provided, I can't understand the thought process behind this... Isn't it just a regular Bag of Words? Can't this be done by using just one of the functions, for example, just Tfidf?Any clarification would be greatly appreciated.
CountVectorizer output that serves as TfidfTransformer input vs. TfidfTransformer()
Same here. Issuing the command to install dpl with verbosity:

gem install dpl --verbose

I've been able to see something weird:

/usr/local/bundle/bin/dpl
Successfully installed dpl-1.9.6
1 gem installed

I don't know why, but it is installed in a non-default path. As a workaround I've added /usr/local/bundle/bin to the $PATH environment variable by issuing the following command:

export PATH=$PATH:/usr/local/bundle/bin

It works for me and my gitlab ci pipelines are now working again. BTW, it would be great to know why it changed suddenly...
The pipeline.gitlab-ci.ymlcode successfully works till yesterday, but today i got the error which says “dpl command not found”the below is my.gitlab-ci.ymlfileimage: node:8.9.3 stages: - job1 - test - production job1: stage: job1 script: "ls -l" test: stage: test script: - npm install production: type: deploy stage: production image: ruby:latest script: - apt-get update -qy - apt-get install -y ruby-dev - gem install dpl - dpl --provider=heroku --app=quailapp --api-key=$HEROKU_PRODUCTION_API_KEY only: - masterThis is the log Generated,Setting up rake (10.5.0-2) ... Setting up libruby2.3:amd64 (2.3.3-1+deb9u2) ... Setting up ruby2.3 (2.3.3-1+deb9u2) ... Setting up ruby2.3-dev:amd64 (2.3.3-1+deb9u2) ... Setting up ruby-dev:amd64 (1:2.3.3) ... Setting up ruby (1:2.3.3) ... Processing triggers for libc-bin (2.24-11+deb9u3) ... $ gem install dpl Successfully installed dpl-1.9.6 1 gem installed $ dpl --provider=heroku --app=quailapp --api-key=$HEROKU_PRODUCTION_API_KEY /bin/bash: line 68: dpl: command not found ERROR: Job failed: exit code 1please help me for finding the solution.
CI/CD Gitlab deployment Failed - dbl command not found
Ok, I'm pretty sure I figured out the answer, which is: you take the longest duration of the stages, which in this case is 350ps, and you multiply it by the number of stages, in this case 5.

So 350 * 5 = 1750ps
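The same arithmetic written out (stage times taken from the question), which also shows why the pipelined latency of one instruction is larger than the 1250 ps non-pipelined latency even though throughput improves:

# Pipelined latency = number of stages * longest stage time (the clock period).
stage_times_ps = [250, 350, 150, 300, 200]             # IF, ID, EX, MEM, WB
clock_ps = max(stage_times_ps)                         # 350 ps
pipelined_latency_ps = len(stage_times_ps) * clock_ps  # 5 * 350 = 1750 ps
non_pipelined_latency_ps = sum(stage_times_ps)         # 1250 ps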
I'm given stages of a clock cycle in a processor.IF ID EX MEM WB 250ps 350ps 150ps 300ps 200psNow I'm being asked what is the total latency of a LW instruction in a pipelined instruction.Here's what I know:The clock cycle time in a pipelined version is 350ps because that's the longest instruction.The clock cycle time in a non-pipelined version is 1250ps because that's the duration of all the instructions added together.But how does the "latency of a LW instruction" relate to those times?
MIPS lw latency in pipelining
grid.best_estimator_ is how you access the pipeline with the best parameters. Now use the named_steps attribute to access the internal estimators of the pipeline.

So grid.best_estimator_.named_steps['reduce_dim'] will give you the pca object. Now you can simply use this to access the components_ and explained_variance_ attributes for this pca object like this:

grid.best_estimator_.named_steps['reduce_dim'].components_
grid.best_estimator_.named_steps['reduce_dim'].explained_variance_
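A minimal sketch of the save/load part of the question (the file name is made up):

import pickle

# Persist the best pipeline found by the grid search.
with open("best_model.pkl", "wb") as f:
    pickle.dump(grid.best_estimator_, f)

# Later: load it back and pull the PCA details the same way.
with open("best_model.pkl", "rb") as f:
    loaded = pickle.load(f)

print(loaded.named_steps['reduce_dim'].components_)
print(loaded.named_steps['reduce_dim'].explained_variance_)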
I am using GridSearchCV with a pipeline as follows:grid = GridSearchCV( Pipeline([ ('reduce_dim', PCA()), ('classify', RandomForestClassifier(n_jobs = -1)) ]), param_grid=[ { 'reduce_dim__n_components': range(0.7,0.9,0.1), 'classify__n_estimators': range(10,50,5), 'classify__max_features': ['auto', 0.2], 'classify__min_samples_leaf': [40,50,60], 'classify__criterion': ['gini', 'entropy'] } ], cv=5, scoring='f1') grid.fit(X,y)How do I now retrieve PCA details likecomponentsandexplained_variancefrom thegrid.best_estimator_model?Furthermore, I also want to save thebest_estimator_to a file using pickle and later load it. How do I retrieve the PCA details from this loaded estimator? I suspect it will be the same as above.
sklearn - How to retrieve PCA components and explained variance from inside a Pipeline passed to GridSearchCV
I think one possible solution is to create your own image pipeline inherited from scrapy.pipelines.images.ImagesPipeline with an overridden method get_media_requests (see documentation for an example). While yielding the scrapy.Request, pass dont_filter=True to the constructor.

(From the follow-up discussion: the caching actually happens in MediaPipeline._process_request, which takes the file from cache if it was already downloaded, based on the request fingerprint. Unfortunately, there doesn't seem to be a way to customize that with an argument or setting.)
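A minimal sketch of the subclass idea (the class name and module path are made up, and given the fingerprint caching mentioned above, treat this as a starting point rather than a complete fix):

import scrapy
from scrapy.pipelines.images import ImagesPipeline

class AllowDuplicatesImagesPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # One request per URL, asking the scheduler not to filter duplicates.
        for image_url in item.get('image_urls', []):
            yield scrapy.Request(image_url, dont_filter=True)

# settings.py would then point ITEM_PIPELINES at this class instead of the
# stock ImagesPipeline, e.g.:
# ITEM_PIPELINES = {'myproject.pipelines.AllowDuplicatesImagesPipeline': 1}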
Please see below an example version of my code, which uses the Scrapy Image Pipeline to download/scrape images from a site:

import scrapy
from scrapy_splash import SplashRequest
from imageExtract.items import ImageextractItem

class ExtractSpider(scrapy.Spider):
    name = 'extract'
    start_urls = ['url']

    def parse(self, response):
        image = ImageextractItem()
        titles = ['a', 'b', 'c', 'd', 'e', 'f']
        rel = ['url1', 'url2', 'url3', 'url4', 'url5', 'url6']
        image['title'] = titles
        image['image_urls'] = rel
        return image

It all works fine but as per default settings, avoids downloading duplicates. Is there any way of overriding this so that I can download the duplicates also? Thanks.
Allow duplicate downloads with Scrapy Image Pipeline?
The current version of imbalanced-learn comes with its own pipeline. You should be able to incorporate it in your sklearn pipeline. You just need to add this line after your sklearn imports, make sure it overrides the sklearn version of Pipeline if previously imported, and then use it like you would use a sklearn pipeline.

from imblearn.pipeline import Pipeline
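A minimal sketch under the assumption that the resampling step goes after vectorization (SMOTE works on numeric feature vectors, not raw text) and that the target is a single label — SMOTE itself does not handle a multi-label y, so that part would still need adapting. The classifier is just a placeholder:

from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import LogisticRegression

pipeline = Pipeline([
    ('vect', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    ('smote', SMOTE()),            # resampling is applied only during fit, not predict
    ('clf', LogisticRegression()),
])

# pipeline.fit(X_train, y_train); pipeline.predict(X_test)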
I have a multi-label classification problem with a huge class imbalance problem as such I would like to create a pipeline step with SMOTE but as the X is basically text and the Y is an array of 1s and 0s for said label, I can't just plug in SMOTE() this way as it needs both a fit and transform.pipeline = Pipeline([ ('smote', SMOTE()), ('vect', CountVectorizer()), ('tfidf', TfidfTransformer()), ('ss', StandardScaler(with_mean=False)), ('clf', model), ])
How would I create a SMOTE pipeline step for text classification, when the resampling method does not work with text?
You can install the library using Brew:

brew install freeimage
I started a new project and I use Monogame (Pipeline) and the Xamarin Studio on my Mac. I installed Mono, Xamarin Studio and the latest version of Monogame (including Pipeline) for Mac. I've created a new Monogame project via Xamarin and everything worked fine.Now I want to add a picture to my project via Pipeline. I added it to the project and pressed "Build". Sadly I get an Error.The Error Message looks like this:Importer 'TextureImporter' had unexpected failure! System.DllNotFoundException: libfreeimage.dylibLooks like libfreeimage is missing but I wasn't able to find a solution for this yet.It works perfectly on my Windows.ThanksEDIT:Fixed by installing the stand alone version for Mac.
Monogame Pipeline Error on Mac: System.DllNotFoundException: libfreeimage.dylib
Use backticks to select columns whose names are numbers

data(ChickWeight)
library(dplyr)
library(tidyr)

chick <- ChickWeight %>%
  spread(Time, weight) %>%
  filter(Diet == 2) %>%
  select(`0`)
I want to reshape the data and then select a specific column.

data(ChickWeight)
chick <- ChickWeight %>%
  spread(Time, weight) %>%
  filter(Diet == "1")

It creates the column names for me, which are numbers. So how could I select the column that is named "0"? I know that %>% select(3) may work, but I need a solution that selects columns by their numeric names.
dplyr select column when column name is number [duplicate]
scikit-learn transformers can't change the number of samples; this is not supported in the API - see http://scikit-learn.org/stable/modules/generated/sklearn.base.TransformerMixin.html#sklearn.base.TransformerMixin.fit_transform - note the dimensions of X, y and X_new. Also, note that they return only X, not y - which means that if you change the X dimension it will no longer match the y dimension.

One way to do it is to run it outside the pipeline - generate new samples for training and put them into the pipeline, and don't generate new samples for testing. But it won't work e.g. with cross-validation.

To make it work for cross-validation and model selection you'll need a custom Pipeline class which supports transformers that change n_samples. For example, an implementation can be found in the imbalanced-learn package: see here. Check this package - if you need upsampling then maybe your upsampling method is already implemented in imbalanced-learn.
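A minimal sketch of the sampler approach with imbalanced-learn's FunctionSampler, assuming a dense numeric X; the resampling logic here is a trivial placeholder for your custom upsampling method, and the downstream steps are made up:

import numpy as np
from imblearn import FunctionSampler
from imblearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest
from sklearn.ensemble import RandomForestClassifier

def duplicate_minority(X, y):
    # Placeholder upsampling: duplicate every sample of the rarest class once.
    classes, counts = np.unique(y, return_counts=True)
    idx = np.where(y == classes[np.argmin(counts)])[0]
    return np.vstack([X, X[idx]]), np.concatenate([y, y[idx]])

pipe = Pipeline([
    ('upsample', FunctionSampler(func=duplicate_minority)),  # only applied during fit
    ('select', SelectKBest(k=5)),
    ('clf', RandomForestClassifier()),
])

Because the imbalanced-learn Pipeline calls fit_resample on sampler steps only at fit time, the test data passed to predict or score is left untouched, which is exactly the behaviour asked about.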
I'm using the Scikit learn pipeline object because I have a sequence of tasks to perform (upsampling, feature selection, classification). My upsampling method is a custom one, that means I have to implement a custom transformer for the pipeline.A transformer must have a transform and fit method. Of course I only want to upsample the training data but not the test data. Does this mean that I only have to implement the fit method but not the transform method (upsampling the dataset passed to the fit method)? As I understand, the transform method is applied to both the training and test set...
Custom transformer for Scikit Learn Pipeline
A simple solution would be to store 1/4th of the image in 4 separate memories. The first memory contains every 4th line, the second every 4th line starting from the second line, etc. I would use 4 even if you need 3 lines, since 4 evenly divides 480 and every other standard resolution. Also, finding a binary number modulo 4 is trivial, which is needed to order the memories.

You can use the MSBs of the line number to address your RAM, and the LSBs to figure out the relative order of each RAM output (code is only to demonstrate the idea, it's not usable as is...):

address <= line(line'left downto 2) & col; -- Or something more efficient on packing

data0 <= ram0(address);
data1 <= ram1(address);
data2 <= ram2(address);
data3 <= ram3(address);

case line(1 downto 0) is
    when "00" => line0 <= data0; line1 <= data1; line2 <= data2;
    when "01" => line0 <= data1; line1 <= data2; line2 <= data3;
    when "10" => line0 <= data2; line1 <= data3; line2 <= data0;
    when "11" => line0 <= data3; line1 <= data0; line2 <= data1;
    when others => null;
end case;
I am currently trying to develop a Sobel filter in VHDL. I am using a 640x480 picture that is stored in a BRAM. The algorithm uses a 3x3 matrix of pixels of the image for processing each output pixel. My problem is that I currently only know of putting an image into a BRAM where each address of the BRAM holds one pixel value. This means I can only read one pixel per clock. My problem is that I am trying to pipeline the data so I would ideally need to be able to get three pixel values (one from each row of the picture) per clock so after my initial latency, I can load in three new pixel values per clock and get an output pixel on every clock. I am looking for a way to do this but cannot figure it out.The only way I can think of to fix this is to have the image in 3 BRAMs. that way I can read in values from 3 rows per each clock cycle. However, there is not enough memory space to fit even one more RAM large enough to fit a 640x480 image let alone three. I could lower the picture size to do it this way, but I really want to do it with my current 640x480 image size.Any help or guidance would be greatly appreciated.
Image Processing Pipelining in VHDL
You can do as follows:

curl http://aa.com/a.jpg | convert - 00001.jpeg
curl http://bb.com/b.jpg | convert - 00002.jpeg

The files are then in JPEG format:

user@host:~# file 00001.jpeg
file.jpeg: JPEG image data, JFIF standard 1.01
user@host:~# file 00002.jpeg
file.jpeg: JPEG image data, JFIF standard 1.01

That's it.

(From the comments: ImageMagick also supports HTTP directly, so convert http://example.com/a.png 00001.jpeg does the trick without curl at all.)
I have some image urls, I want to download them. But these files have different suffix, such as.jpg,.pngor.bmp. I also want to change them into a unified format, such as.JPEG. So I want to use thecurlcommand to download the image into the memory cache, then use theconvertcommand inImageMagickpackage to convert the data format into.JPEGformat. Is there a method to do this work?`curl http://aa.com/a.jpg` `convert a.jpg 00001.JPEG` `rm a.jpg` `curl http://bb.com/b.png` `convert b.png 00002.JPEG` `rm b.png`I want to simplify this procedure, let the temp files save into cache, then not save into the disk directly, so lessen the burden of the disk. Is there any way to usepipelinetechnology to do this work? such as`curl http://aa.com/a.jpg | convert ... | ...`Thanks in advance.
Could curl download an image in remote url into cache, then use pipeline to convert it?
Pipe your output through perl:

echo -e 'aa\nbb' | perl -ne 'print $., ",", $_'

Output:

1,aa
2,bb

(From the comments: awk works too, e.g. | awk '{print NR "," $0}', and since your command produces one line per iteration, you can pipe the whole loop's output through it: for host in $list; do ...; done | perl -pe 's/^/$.,/')
I have a script that prints out the average time when pinging a server, shown below:ping -c3 "${I}" | tail -1 | awk '{print $4}' | cut -d '/' -f 2 | sed 's/$/\tms/'How can I add the line number to output of the script above when pinging a list of servers ??my actual output when pinging list of 3 host is:6.924 ms 100.099 ms 7.756 msI want the output to be like this:1,6.924 ms 2,100.099 ms 3,7,756 msso that this can be read by excel :) Thank in advanced!!
print line number of output in shell script
Take a look at Tee-Object. From help:

The Tee-Object cmdlet sends the output of a command in two directions (like the letter "T"). It stores the output in a file or variable and also sends it down the pipeline. If Tee-Object is the last command in the pipeline, the command output is displayed in the console.

(From the comments: e.g. .\Test.ps1 | Tee-Object -FilePath test.log)
Is there a usage of pipeline for PowerShell to Write-Output & write to file in the same time, without using a custom wrapping function?
Is it possible for PowerShell to write message to multiple target
You should use ValueFromPipeline of the ParameterAttribute class:

[Cmdlet(VerbsCommon.Find, "Numbers")]
public class FindNumbers : Cmdlet
{
    [Parameter(ValueFromPipeline = true)] // The data appear in this variable
    public int[] Input { get; set; }

    protected override void ProcessRecord()
    {
        foreach (var variable in Input)
        {
            if (variable % 2 == 0)
            {
                WriteObject(variable);
            }
        }
    }
}
How to consume data from pipeline when writing cmdlets in C#?For example I have two classes:This one produces data:[Cmdlet(VerbsCommon.Get, "Numbers")] public class GetNumbers : Cmdlet { protected override void ProcessRecord() { WriteObject(new[] {1, 2, 3, 4, 5}, true); } }And this one must consume this data:[Cmdlet(VerbsCommon.Find, "Numbers")] public class FindNumbers: Cmdlet { protected override void ProcessRecord() { foreach (var variable in %Input%) // Where do I get input? Any ReadRecord or something else? { if (variable % 2 == 0) { WriteObject(variable); } } } }In this way:Get-Numbers | Find-Numbers
Consume data from pipeline
The order is the order in which you added them:

ChannelPipeline p = pipeline();
p.addLast("1", new InboundHandlerA());
p.addLast("2", new InboundHandlerB());
p.addLast("3", new OutboundHandlerA());
p.addLast("4", new OutboundHandlerB());
p.addLast("5", new InboundOutboundHandlerX());

In the given example configuration, the handler evaluation order is 1, 2, 3, 4, 5 when an event goes inbound. When an event goes outbound, the order is 5, 4, 3, 2, 1. On top of this principle, ChannelPipeline skips the evaluation of certain handlers to shorten the stack depth.

You can view this page for more detail: https://netty.io/4.0/api/io/netty/channel/ChannelPipeline.html
I am learning netty and there is a following code from exampleChannelPipeline pipeline = pipeline(); // Enable stream compression (you can remove these two if unnecessary) pipeline.addLast("deflater", new ZlibEncoder(ZlibWrapper.GZIP)); pipeline.addLast("inflater", new ZlibDecoder(ZlibWrapper.GZIP)); // Add the number codec first, pipeline.addLast("decoder", new BigIntegerDecoder()); pipeline.addLast("encoder", new NumberEncoder()); // and then business logic. // Please note we create a handler for every new channel // because it has stateful properties. pipeline.addLast("handler", new FactorialServerHandler());My question is where can I see the list of valid 1st parameters for addLast method, like deflater, inflater, decoder, encoder, handler and so on.And I cannot find the place in source code where mapping is implemented. Here I mean message arrive and ChannelPipeline checks that deflater is set and calls ZlibEncoder.GZIP method.
netty ChannelPipeline addLast
This line of code:

CommandParameter testParam = new CommandParameter("test3");

creates a parameter named test3 that has a value of null. I suspect you want to create a named parameter, e.g.:

CommandParameter testParam = new CommandParameter("username", "test3");

And your script needs to be configured to accept parameters, e.g.:

--- Contents of 'new group.ps1' ---
param([string]$Username)
...
I'am tring to call a powershell script and pass through a parameter. I would all so like to pass through a more than one parameter eventuallyRunspaceConfiguration runspaceConfiguration = RunspaceConfiguration.Create(); Runspace runspace = RunspaceFactory.CreateRunspace(runspaceConfiguration); runspace.Open(); RunspaceInvoke scriptInvoker = new RunspaceInvoke(runspace); Pipeline pipeline = runspace.CreatePipeline(); String scriptfile = "..\\..\\Resources\\new group.ps1"; Command myCommand = new Command(scriptfile, false); CommandParameter testParam = new CommandParameter("test3"); myCommand.Parameters.Add(testParam); pipeline.Commands.Add(myCommand); Collection<PSObject> psObjects; psObjects = pipeline.Invoke(); runspace.Close();The problem seems to be ... well nothing happens. Is this how to correctly assign the varibles? Giving test powershell script# creates group net localgroup $username /Add # makes folder #mkdir $path
Assign variables to powershell using C#
In a functional language, everything is dataflow. You can use functions as your module concept.

To address each of your use-cases:

A pluggable module is a Clojure function that takes a single argument that is the state of your data vector, e.g. (def module-a some-function). To allow for easy extension by modules, I suggest using a Clojure map as your state, where one field is your array of floats.

Composing modules is function composition, e.g. (def combined-module (comp module-a module-b)).

Auxiliary functions are accessor functions, extracting state from your data. E.g. if your data is a Clojure map with a :moving-average field, then the keyword :moving-average is your accessor function. State is not stored in modules.

Boilerplate code is hidden in the implementation of your functions, which can be declared anywhere, possibly in another file and namespace.
I'm developing some simulation software in Clojure that will need to process lots of vector data (basically originating as offsets into arrays of Java floats, length typically in 10-10000 range). Large numbers of these vectors will need to go through various processing steps - e.g. normalising the vectors, concatenating together two streams of vectors, calculating a moving average etc.Rather than doing everything in an imperative style, I was hoping to do was create a more functional-style Clojure solution that would do the following:allow any vector function to be turned into apluggable module, e.g. (def module-a (make-module some-function))allow these modules to becomposed in pipelines, e.g. (def combined-module (combine-in-series module-a module-b)) would feed the output of module-a into the input of module-ballowauxillary functionsto access state stored within a given module, e.g. (get-moving-average some-moving-average-module), which would need to work even if some-moving-average-module is embedded deep within a combined pipelinehide any boilerplate codebehind the scenes, e.g. allocating sufficiently large temporary arrays for vector calculation.Does this sound like a sensible approach?If so, any implementation hints or libraries that might help?
Pluggable vector processing units in Clojure
Do you mean the integrated pipeline mode? If so then you're looking for HttpRuntime.UsingIntegratedPipeline:

if (HttpRuntime.UsingIntegratedPipeline)
{
    //Yep we're using it
}
Is it possible to determine the managed pipeline IIS7 is running under in ASP.NET?
Is it possible to determine the managed pipeline IIS7 is running under in ASP.NET?
Pipes in Base support only functions with one argument. Combining the pipe with an anonymous function will work, where (df.A .> 0) needs to be wrapped in ( and ):

using DataFrames
df = DataFrame(A=[-1,missing,1], B=[10,20,30])

cond = (df.A .> 0) |> x -> replace(x, missing => false)
cond
#3-element Vector{Bool}:
# 0
# 0
# 1

# Or using coalesce.
cond = (df.A .> 0) |> x -> coalesce.(x, false)

Another possibility will be to use the package Pipe.jl, where @pipe needs to be used and the placeholder is _.

using Pipe: @pipe
cond = @pipe (df.A .> 0) |> replace(_, missing => false)

Beside Pipe.jl this works also with Hose.jl

using Hose
cond = @hose (df.A .> 0) |> replace(_, missing => false)
#or
cond = @hose (df.A .> 0) |> replace(missing => false)

or with Plumber.jl

using Plumber
cond = @pipe (df.A .> 0) |> replace(_, missing => false)
#or
@pipe cond = (df.A .> 0) |> replace(_, missing => false)

or with Chain.jl

using Chain
cond = @chain df.A .> 0 replace(_, missing => false)
#or
cond = @chain df.A .> 0 replace(missing => false)

or with Lazy.jl

using Lazy
cond = @> df.A .> 0 replace(missing => false)
#or
cond = @as x df.A .> 0 replace(x, missing => false)
Consider the following:df = DataFrame(A=[-1,missing,1],B=[10,20,30]) 3×2 DataFrame Row │ A B │ Int64? Int64 ─────┼──────────────── 1 │ -1 10 2 │ missing 20 3 │ 1 30 cond = df.A .> 0. #1 cond = replace(cond, missing => false) #2 df[cond,:]Is it possible to combine#1and#2into one line using|>likecond = df.A .> 0 |> replace(missing => false)which fails.
【Julia: DataFrame】 How to combine two lines into one using `|>`
Do we have to call example_pipe.fit() before using cross_val_score()?

If you go to the Scikit-Learn documentation, you find the answer: cross_val_score first fits your example_pipe, then gets the score of the cross validation.
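A small sketch of the two usages side by side, continuing the example_pipe from the question (the train/test split here is illustrative):

from sklearn.model_selection import cross_val_score, train_test_split

# 1) No prior fit needed: cross_val_score clones the pipeline and fits a fresh
#    copy on each of the 5 training folds, then scores the held-out fold.
scores = cross_val_score(example_pipe, X, y, cv=5, scoring='accuracy')
print(scores.mean())

# 2) Single split: fit once on the training portion, score once on the test portion.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
example_pipe.fit(X_train, y_train)
print(example_pipe.score(X_test, y_test))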
Imagine we have the following pipeline:example_pipe = Pipeline(steps=[ ('scaler', StandardScaler()), ('selector', SelectKBest(k=len(X.columns)-5)), ('classifier', KNeighborsClassifier()) ])Now we want to get the performance of the pipeline with:# 1) cross_val_score(example_pipe, X, y, cv=5, scoring='accuracy').mean() # 2) example_pipe.fit(X_train, y_train) example_pipe.score(X_test, y_test)How is the first different from the second in regards to the score we get (except of course that it does cross-validation)? Do we have to callexample_pipe.fit()before usingcross_val_score().I've found the following methods in the documentation, but it's a bit confusing because I thought that calling.fit()already implies calling.transform().fit(X[, y]) --> Fit the modelfit_predict(X[, y]) --> Applies fit_predict of last step in pipeline after transforms.fit_transform(X[, y]) --> Fit the model and transform with the final estimatorscore(X[, y, sample_weight]) --> Apply transforms, and score with the final estimator
Scikit Learn Pipeline: Calling .fit() and .score() vs cross_val_score()
This should give you the expected result:

trigger:
- master

pool:
  vmImage: ubuntu-latest

steps:
- pwsh: |
    Write-Host "Setting up the date time for build variable"
    $date=$(Get-Date -format yyyyMMdd-Hmmss)
    Write-Host "##vso[task.setvariable variable=currentTimeStamp]$date"
  displayName: 'Getting timestamp'

- task: PublishPipelineArtifact@1
  inputs:
    targetPath: '$(Pipeline.Workspace)'
    artifact: 'ArtifactName-$(currentTimeStamp)'
    publishLocation: 'pipeline'
Setting up a yml based azure build pipeline for which publishing artifacts require timestamp based name. Trying to do something like this'ArtifactName' + currentTimeStampHow can this be done in an yml file?
Get current Timestamp and concatenate with string in yml file
The output declaration should be:

output:
tuple id, input_for_a, input_for_b, input_for_a_b into (downstream_a, downstream_b)

As an alternative, you may want to consider using DSL2, which no longer has the single-channel usage requirement. Read more about it here.
I'm implementing a Nextflow workflow where each process can give multiple outputs that may be needed downstream on different processes.process multiple_outputs { input: tuple id, input from previous_process output: tuple id, input_for_a, input_for_b, input_for_a_b into downstream }Nextflow documentation states thatoperator into can use multiple channelsandpromotes channel duplication as a pattern.However none of these options seems to work insidethe process primitiveandmultiple channel output is not documented:The patternfound on this answer(syntax error):output: tuple id, input_for_a, input_for_b, input_for_a_b into { downstream_a; downstream_b } // nor these variants: // // `into { downstream_a, downstream_b }` // `into downstream_a, downstream_b` // `into tuple a, b`Repeating the output into duplicated channels (one channel is empty):output: tuple id, input_for_a, input_for_a_b into downstream_a tuple id, input_for_b, input_for_a_b into downstream_b // runs, but cannot find the file in one of the channelsWhich is the right way to use the output in multiple channels?
How do I send a process output to multiple channels in Nextflow?
I don't have the specific answer to your question, but if you are working on .NET Core projects, what worked for me is setting the project to generate the NuGet package on build. Look for GeneratePackageOnBuild in the csproj. This will create the NuGet package each time the project is built, with the properties from your csproj. If you don't set AssemblyName, the package name will be the same as your csproj filename.

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <Company>ACME inc</Company>
    <Version>1.1.0</Version>
    <AssemblyName>YourPackageID</AssemblyName>
  </PropertyGroup>
  <PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
    <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
  </PropertyGroup>
</Project>

You won't need to pack anything, just push the nupkg from the build directory. The NuGet Push task just needs a pattern like this (the path may contain multiple patterns):

$(Build.SourcesDirectory)/**/**/*.nupkg;$(Build.SourcesDirectory)/**/**/**/*.nupkg;
We have an existing DevOps pipeline set up to use .NET Core tasks to package and push a .NET Standard class library as a NuGet artifact. We are trying to replace this with a pipeline that uses NuGet tasks (NuGet Pack and NuGet Push). We haven't been able to get the package name created by the NuGet tasks to match the package name we had been using when packaging with the .NET Core tasks.

The .NET Core tasks were pulling the package name from the PackageId specified in the .csproj file. The NuGet task appears to be pulling it from the assembly name of the project. Is there a way to configure the package name with NuGet tasks?
How to configure Azure Devops Pipeline package name for Nuget artifact
Here is a solution:

pipeline {
  agent any
  triggers {
    GenericTrigger(
      genericVariables: [
        [key: 'ref', value: '$.ref']
      ],
      causeString: 'Triggered on $ref',
      token: 'abc123',
      printContributedVariables: true,
      printPostContent: true,
      silentResponse: false,
      regexpFilterText: '$ref',
      regexpFilterExpression: 'refs/heads/' + BRANCH_NAME
    )
  }
  stages {
    stage('Some step') {
      steps {
        sh "echo $ref"
      }
    }
  }
}

It can be triggered with something like:

curl -X POST -H "Content-Type: application/json" -H "headerWithNumber: nbr123" -H "headerWithString: a b c" -d '{ "before": "1848f12", "after": "5cab1", "ref": "refs/heads/develop" }' -vs http://admin:admin@localhost:8080/jenkins/generic-webhook-trigger/invoke?requestWithNumber=nbr%20123\&requestWithString=a%20string
It is possible to set a token in the job properties in the Jenkins web interface, but I didn't find it in the pipeline documentation. I'm talking about this one:
How to set a job token in declarative pipeline?
I don't think one can do that in Drone. As mentioned in the service docs, detached steps are basically services. And also mentioned in the docs is:

It is important to note the service container exit code is ignored, and a non-zero exit code does not fail the overall pipeline. Drone expects service containers to exit with a non-zero exit code, since they often need to be killed after the pipeline completes.

So Drone doesn't care about services after they are started and, as far as I know, doesn't give us any options to stop them. But they will be automatically killed when all the steps are finished.

If you are trying to run two types of test in a single build and require different resources for each, I'd suggest looking into multiple pipelines and creating separate pipelines for them instead of starting/stopping services.
In Drone you can detach steps, as seen here: https://docs.drone.io/config/pipeline/steps/

Example use case: I started a detached database. Some tests run against it. Then the db is no longer needed, so I would like to terminate that detached step.
How to stop a detached step in drone.io?
Move-Item does not output to the pipeline by default. Use the -PassThru switch:

-PassThru
Returns an object representing the item with which you are working. By default, this cmdlet does not generate any output.

That will pipe it directly into Rename-Item, and you only have to specify -NewName:

Get-ChildItem -Path $folderpath -Filter $folderfile | Move-Item -Destination $destination -PassThru | Rename-Item -NewName $newname -PassThru | Out-File -FilePath $logpath -Append

Also, you don't even have to use Rename-Item at all but can move the file directly to the final target directory + name (assuming $destination is a directory path):

Get-ChildItem -Path $folderpath -Filter $folderfile | Move-Item -Destination (Join-Path $destination $newname) -PassThru | Out-File -FilePath $logpath -Append
In this script, everything is working as I expect for the most part. However, the rename operation will only work outside of these piped commands:

Get-ChildItem -Path $folderpath -Filter $folderfile | Move-Item -Destination $destination | sleep 5 | Out-File -FilePath $logpath -Append

If I try to do the rename as part of the piped commands, it simply doesn't work. Anywhere outside of that and it will work for a single iteration of the filewatcher, and then no more. Why will the rename not work as a piped command?

Get-ChildItem -Path $folderpath -Filter $folderfile | Move-Item -Destination $destination | Rename-Item $destination$folderfile -NewName $newname | Out-File -FilePath $logpath -Append
Powershell: Why does Rename-Item not work as a piped command?
You could use conditional scripts inside your process and use just one channel.

process action {
    input:
    val t from trigger
    file in from input_channel

    output:
    file out into unique_receiver

    script:
    if (t == 1)
        """
        foo ${in} > ${out}
        """
    else if (t == 2)
        """
        bar ${in} > ${out}
        """
}
I am trying to create a conditional pipeline in Nextflow. For example, process A outputs a value to a channel. If the value is 1, then run X, otherwise run Y.

Here's what I am trying to do:

initialData = 2
receiver1 = "EMPTY"
receiver2 = "EMPTY"
receiver3 = ""

process A {
    input:
    val initialData

    output:
    val initialData into trigger

    '''
    echo 10
    '''
}

process foo {
    input:
    val trigger

    output:
    val "I ran from FOO" into receiver2

    when:
    trigger == 2

    '''
    echo I ran from FOO
    '''
}

process bar {
    input:
    val trigger

    output:
    val "I ran from BAR" into receiver1

    when:
    trigger == 1

    '''
    echo I ran from BAR
    '''
}

Assume foo and bar are equivalent but different implementations (e.g. one converts a movie from AVI to h.264, and the other converts from MOV to h.264). I'd like to have another process, say C, that can read either from bar or foo without knowing anything about trigger. But Nextflow complains if I use the same output channel name in both foo and bar.
Conditional Pipeline in Nextflow
By default, the Rename-Item cmdlet doesn't return anything. You'll have to force it to in a pipe. Use the -PassThru parameter when in the pipe and it should copy just fine.

Get-ChildItem -path $AudioDir $LatestMP3 | Rename-Item -newname {(GET-DATE).ToString("yyyy-MM-dd") + " NewAUDIO.mp3"} -PassThru | Copy-Item -destination $MediaDir
I am quite new to PowerShell. I have created a PowerShell script which identifies a specific MP3 file out of a large number of very similar files in one folder based on certain criteria:

1 - It is the most recently created file.
2 - It is an MP3 file.
3 - It has a certain character set in the file name.

The file is then renamed to today's date with some other text added to the file name:

$AudioDir = "\\Server\Audio\"
$MediaDir = "\\Server2\Media\"

$LatestMP3 = Get-ChildItem -Path $AudioDir "*NEW.MP3" | Sort-Object CreationTime | Select-Object -Last 1
Get-ChildItem -path $AudioDir $LatestMP3 | Rename-Item -newname {(GET-DATE).ToString("yyyy-MM-dd") + " NewAUDIO.mp3"}

This part works perfectly, but the next step does not. I want to copy that renamed file to another folder on another server ($MediaDir = "\\Server2\Media\").

I am trying a pipe:

Get-ChildItem -path $AudioDir $LatestMP3 | Rename-Item -newname {(GET-DATE).ToString("yyyy-MM-dd") + " NewAUDIO.mp3"} | Copy-Item -destination $MediaDir

There is no error, and the file renames as expected, but the Copy-Item -destination $MediaDir does nothing.

Any help would be greatly appreciated. Thanks
Pipelined Copy-Item doesn't work after a file is renamed
CrossValidatorModel not only contains the best model with the highest average cross-validation metric across folds (aka bestModel), but also the metrics for each param map evaluated. To grab these, you can use the getEstimatorParamMaps method in combination with avgMetrics, for example:

val cvModel = cv.fit(training)
cvModel.getEstimatorParamMaps.zip(cvModel.avgMetrics)
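If you happen to be working from PySpark rather than Scala, the same information is exposed there as well; a minimal sketch, assuming a fitted CrossValidatorModel named cv_model (the variable name is illustrative):

# Pair each evaluated param map with its average cross-validation metric
for params, metric in zip(cv_model.getEstimatorParamMaps(), cv_model.avgMetrics):
    print({p.name: v for p, v in params.items()}, metric)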
How can I obtain the result of the evaluator in a Spark pipeline?

val evaluator = new BinaryClassificationEvaluator()

val cv = new CrossValidator()
  .setEstimator(pipeline)
  .setEvaluator(evaluator)
  .setEstimatorParamMaps(paramGrid)
  .setNumFolds(10)

The result of the transform operation only contains the labels, probabilities, and predictions. It is possible to obtain a "best model", but I would rather be interested in getting the evaluation metrics.

Here https://jaceklaskowski.gitbooks.io/mastering-apache-spark/content/spark-mllib/spark-mllib-evaluators.html they show how to use an evaluator without a pipeline. None of the very interesting links seems to show the evaluator's result at the end: neither https://benfradet.github.io/blog/2015/12/16/Exploring-spark.ml-with-the-Titanic-Kaggle-competition, nor https://developer.ibm.com/spark/blog/2016/02/22/predictive-model-for-online-advertising-using-spark-machine-learning-pipelines/, nor the official example https://github.com/apache/spark/blob/39e2bad6a866d27c3ca594d15e574a1da3ee84cc/examples/src/main/scala/org/apache/spark/examples/ml/ModelSelectionViaCrossValidationExample.scala

In fact, one of the links calculates the metric by hand:

cvAccuracy = cvPrediction.filter(cvPrediction['label'] == cvPrediction['prediction']).count() / float(cvPrediction.count())

I would have expected to obtain the metrics on a per-fold level, or possibly the mean / variance.
Spark pipeline evaluation
Yes, there is. You can directly typecast a single value to an array. If that value is already an [Object[]], nothing is effectively changed.

[Array]$result = <pipeline_statement>

As mentioned in the comments by @wannabeprogrammer, this does not convert a null value into an array. If something can return null, the following addition will remedy the situation:

if ($result -eq $null) { $result = @() }
Is there any nice way to ensure a pipeline result is always an array, without the array literal @()?

Currently, I always find myself writing a pipeline and assuming that the result is an array, e.g.:

$Results = $ResultFiles | Where Name -like $Pattern | Sort -Unique Name

# Processing that assumes $Results is an array, e.g.
Out-Host -InputObject "found $($Results.Length) matching files..."

# Further processing that assumes $Results is an array

Then I realize that I need to ensure $Results is always an array, so I come back to add the magic @(), with @( in front and ) at the back:

$Results = @( <pipeline_statement> )

or, even more magic, by adding to an empty array:

$Results = @() + <pipeline_statement>

My question: is there any way to ensure that a pipeline always results in an array that doesn't require the "magic" @()? I think of creating a function to collect pipeline results, e.g. ConvertTo-Array, like:

$Results = <pipeline_statement> | ConvertTo-Array

But I'd rather use a default cmdlet or idiom if any exists.

Note: I'm also tempted to create ConvertTo-CustomObject, as I often find myself creating PSCustomObject from a hash table.
Ensure pipeline always results in array without using @()?
You could use the -match operator on the entire resulting collection:

$ci = Get-ChildItem | Select-Object -ExpandProperty "Name"
$ci -match "Desktop"

Now, the last statement will return all strings that match "Desktop". If no match is found, nothing is returned. So now we can do (in PowerShell 3.0 and above):

$ci -match "Desktop" -as [bool]

and have it return True when one or more items match "Desktop".

And in one-liner format:

(ls|select -exp Name)-match"Desktop"-as[bool]
A command outputs multiple lines of text, and I want to determine whether any of those lines matches a string I'm looking for.

Example:

Get-ChildItem | Select -ExpandProperty "Name" | %{ $_ -Match "Desktop" }

returns false for each folder except for the one named "Desktop".

Is it possible to simply return one True if any of those folders has "Desktop" in its name, and one False otherwise? I'm thinking something like:

Get-ChildItem | Select -ExpandProperty "Name" | <aggregate_cmdlet> | %{ $_ -Match "Desktop" }
Is it possible to get the entire output of a Powershell command before sending it down the pipe?
To find the entry point, look into each shared object with:

nm $library | egrep "T main$"

The library with main() will output something like:

090d8ab0 T main

A very useful way to visualize the execution tree is to run:

valgrind --tool=callgrind ./my_executable -arg -arg ....

(you can abort execution early with Ctrl+C)

This will output a callgrind.<pid> file. To visualize it, run kcachegrind callgrind.<pid>.

You will need valgrind (sudo apt-get install valgrind) and kcachegrind (sudo apt-get install kcachegrind).
I have a C++ project in Ubuntu 12.04. To run the project, the makefile requires the following files:

1 - All the .cpp files
2 - All the .h files
3 - Three shared libraries

The project is fully functional and performs according to the specifications. All the required .cpp files and .h files are available. The problem is that there is no main() function in any of the source files, and the program entry point resides in one of the three shared libraries. My job is to find out the program execution pipeline, and without having any main file I am not able to do that. I can't run the project in any IDE (e.g. Eclipse) because there is no main function available.

Question: Can you please tell me how to find the program entry point?

P.S.: I will be glad to provide any kind of information or material you may need to solve my problem.

Edit: The CMakeLists.txt file is available here.

Edit 2: The build.sh file is available here.
Can't find program entry point in a C++ project
Read the Wiki page about transactions on GitHub. In particular, this example:

int callbackResult;
using (var trans = redis.CreateTransaction())
{
    trans.QueueCommand(r => r.Increment("key"));
    trans.QueueCommand(r => r.Increment("key"), i => callbackResult = i);

    trans.Commit();
}
//The value of "key" is incremented twice. The latest value of which is also stored in 'callbackResult'.

There is a virtual method overload with a callback that will give you the result:

public virtual void QueueCommand(Func<IRedisClient, string> command, Action<string> onSuccessCallback, Action<Exception> onErrorCallback)
I want to get the result of the QueueCommand after the pipeline flush; however, I don't know how to get the result using ServiceStack.Redis. For example:

pipeline.QueueCommand(r => r.Get<string>("foo"));
pipeline.Flush();

Where should I get the result of "foo", so that I can pass it back to others?
How to get result from pipeline by using ServiceStack.Redis
Using Compose, and calling the resulting function, gives this:

"%|>%" <- function(...) Compose(...)()

Now get rid of the 'x' as the final "function" (replaced here with an actual identity function, which is not needed but shown for the example):

anonymousPipelineTest <- function(x){x^2} %|>% function(x){x+5} %|>% function(x){x}

anonymousPipelineTest(1:10)
[1] 6 9 14 21 30 41 54 69 86 105
I have a question which is an extension of another question. I want to be able to pipeline anonymous functions. In the previous question, the answer for pipelining defined functions was to create a pipeline operator "%|>%" and define it this way:

"%|>%" <- function(fun1, fun2){
  function(x){fun2(fun1(x))}
}

This allows you to call a series of functions while continually passing the result of the previous function to the next. The caveat was that the functions had to be predefined. Now I'm trying to figure out how to do this with anonymous functions. The previous solution, which used predefined functions, looks like this:

square <- function(x){x^2}
add5 <- function(x){x + 5}

pipelineTest <- square %|>% add5

which gives you this behaviour:

> pipelineTest(1:10)
[1] 6 9 14 21 30 41 54 69 86 105

I would like to be able to define the pipelineTest function with anonymous functions like this:

anonymousPipelineTest <- function(x){x^2} %|>% function(x){x+5} %|>% x

When I try to call this with the same arguments as above, I get the following:

> anonymousPipelineTest(1:10)
function(x){fun2(fun1(x))}
<environment: 0x000000000ba1c468>

What I'm hoping to get is the same result as pipelineTest(1:10). I know that this is a trivial example. What I'm really trying to get at is a way to pipeline anonymous functions. Thanks for the help!
R Pipelining with Anonymous Functions
Try combining the commands using &&, so that the second one runs only after the first one completes successfully:

system("(nohup wget $file && ./myscript.sh $file >> output.txt) &");
I have a list of file URLs that I want to download:

http://somedomain.com/foo1.gz
http://somedomain.com/foo2.gz
http://somedomain.com/foo3.gz

What I want to do for each file is the following:

1 - Download foo1, foo2, ... in parallel with wget and nohup.
2 - Every time a download completes, process the file with myscript.sh.

What I have is this:

#! /usr/bin/perl

@files = glob("foo*.gz");

foreach $file (@files) {
   my $downurls = "http://somedomain.com/".$file;
   system("nohup wget $file &");
   system("./myscript.sh $file >> output.txt");
}

The problem is that I can't tell the above pipeline when a file has finished downloading, so myscript.sh doesn't get executed properly. What's the right way to achieve this?
Pipeline For Downloading and Processing Files In Unix/Linux Environment With Perl
You can use:

GITLAB_USER_ID - The ID of the user who started the job.
GITLAB_USER_LOGIN - The username of the user who started the job.
GITLAB_USER_NAME - The name of the user who started the job.

Reference: Predefined variables reference | GitLab
I am configuring a GitLab pipeline to build my project, and I need a way to retrieve the user ID of the person who triggered the pipeline. Is there any predefined variable that could help me retrieve that value? I tried CI_PIPELINE_USER_ID but it didn't return anything.
How to get the user id who triggers the pipeline in GitLab?
No, it's not. Pipeline components are expanded, redirected, and executed in parallel. In your example, ls may outrun cat > file and retrieve the list of files before file is created, and vice versa. You can't rely on cat > file winning the race every time.

In order to ensure that the redirection is performed before any component of the pipeline is executed, you need to enclose the pipeline in braces and move the redirection outside:

{ ls | cat | cat | cat; } > file
cat file
For example, consider invoking this shell command in an empty directory:

ls | cat | cat | cat > file; cat file

When I actually tested this, the result was always:

file

But is this behaviour guaranteed by POSIX? I read Section 2.7 (Redirection) and Section 2.9.2 (Pipelines) of the Open Group POSIX standard, but I couldn't find anything about this.

There was one statement that seems relevant, but I couldn't really understand what it means, since I'm not a native English speaker. The statement was this:

The standard input, standard output, or both of a command shall be considered to be assigned by the pipeline before any redirection specified by redirection operators that are part of the command (see Redirection).
Is it guaranteed that a POSIX shell always opens files for redirection before executing any commands in a pipeline?
BatchElements is non-deterministic and doesn't batch things across bundles. The direct runner is really simple and puts the entire PCollection into a single bundle, but Dataflow is written as a distributed runner, and even if there is only one worker, there may be multiple bundles running concurrently (e.g. on different threads), and bundles tend to be fairly small.

You could look into using Beam's GroupIntoBatches, which works better in streaming mode (though that requires choosing a key within which things are batched).
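A minimal sketch of that suggestion (the key here is purely illustrative, and a bounded Create source stands in for the Pub/Sub input):

import apache_beam as beam

with beam.Pipeline() as p:
    _ = (
        p
        | "Create" >> beam.Create(range(10))
        | "AddKey" >> beam.Map(lambda x: ("fixed-key", x))  # GroupIntoBatches requires key/value pairs
        | "Batch" >> beam.GroupIntoBatches(3)                # up to 3 values per batch, per key
        | "Log" >> beam.Map(print)
    )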
I'm building a pipeline with Apache Beam (GCP Dataflow) and Python. My pipeline looks like this:

...
with beam.Pipeline(options=self.pipeline_options) as pipeline:
    somepipeline = (
        pipeline
        | "ReadPubSubMessage" >> ReadFromPubSub(
            subscription=self.custom_options.some_subscription)
        | "Windowing" >> beam.WindowInto(beam.window.FixedWindows(30))
        | "DecodePubSubMessage" >> beam.ParDo(DecodePubSubMessage()).with_outputs(ERROR_OUTPUT_NAME, main=MAIN_OUTPUT_NAME)
        | "Geting and sorting listings" >> beam.ParDo(SortByCompletion())
        | "Batching listings" >> beam.BatchElements(min_batch_size=3, max_batch_size=3)
        | "Print logs" >> beam.Map(logging.info)
    )
...

Everything works as expected when I run the pipeline via DirectRunner: I get 1 batch with 3 elements inside. But when I run the same code with DataflowRunner, I get 3 batches with 1 element inside each batch.

This happens even when I'm running this pipeline in parallel (in two terminal windows). Both were run with the streaming flag. Messages were sent to Pub/Sub via a Python script, one by one, immediately.

Question: What can cause this problem with DataflowRunner (my assumption was the number of workers in Dataflow, but when I checked there was only 1 worker in this job), and how can I get the same result as with DirectRunner?

Thank you!
Batching with BatchElements works differently in DirectRunner and DataflowRunner (GCP/Dataflow)
Import argv from sys and then use argv[1] to get the parameter in the Databricks activity.
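A minimal sketch of the Databricks-side script, assuming the pipeline passes a single positional parameter (the variable name is illustrative):

from sys import argv

# ADF passes the activity parameters as positional command-line arguments;
# argv[0] is the script path, argv[1] is the first parameter.
my_param = argv[1]
print(f"Received pipeline parameter: {my_param}")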
I am building an Azure Data Factory pipeline and I would like to know how to get a parameter into the Python script. The Python script is located in Databricks (DBFS) and is run from Azure Data Factory. So, in my ADF pipeline, I have some parameters which I'd like to introduce and use inside the Python script. Any idea how this works?
How to pass parameter to python script from an Azure Data Factory pipeline
If I am reading this code right, parBuffer n only sparks the first n elements -- all the rest are evaluated in the usual Haskell way.

parBuffer :: Int -> Strategy a -> Strategy [a]
parBuffer n strat = parBufferWHNF n . map (withStrategy strat)

parBufferWHNF :: Int -> Strategy [a]
parBufferWHNF n0 xs0 = return (ret xs0 (start n0 xs0))
  where -- ret :: [a] -> [a] -> [a]
        ret (x:xs) (y:ys) = y `par` (x : ret xs ys)
        ret xs _ = xs

        -- start :: Int -> [a] -> [a]
        start 0 ys = ys
        start !_n [] = []
        start !n (y:ys) = y `par` start (n-1) ys

Note in particular that start 0 ys = ys and not, say, start 0 ys = evaluateThePreviousChunk `pseq` start n0 ys or something that would start up more sparks. The documentation definitely doesn't make this clear -- I don't think "rolling buffer strategy" obviously implies this behavior, and I agree it's a bit surprising, to the point that I wonder whether this is just a bug in the parallel library that nobody caught yet.

You probably want parListChunk instead.
I'm working with this code I wrote, and for some reason threadscope keeps telling me that it's almost never using more than one core at a time. I think the problem is that in order to get the second line it needs to fully evaluate the first line, but I can't figure out an easy way to get it to read in 11 lines at a time.

module Main where

import Control.Parallel
import Control.Parallel.Strategies
import System.IO
import Data.List.Split
import Control.DeepSeq

process :: [String] -> [String]
process lines = do
  let xs = map (\x -> read x :: Double) lines
      ys = map (\x -> 1.0 / (1.0 + (exp (-x)))) xs
      retlines = map (\x -> (show x) ++ "\n") ys
  retlines

main :: IO ()
main = do
  c <- getContents
  let xs = lines c
      ys = (process xs) `using` parBuffer 11 rdeepseq
  putStr (foldr (++) [] ys)
How do you parallelize lazily read information from stdin in Haskell?
Use FeatureUnion and probably ColumnTransformer, e.g.

union = FeatureUnion([("MinMax", MinMaxScaler()),
                      ("SS", StandardScaler()),
                      ("Log", FunctionTransformer(np.log1p))])
proc = ColumnTransformer([('trylots', union, ['Value_In_Dollars'])],
                         remainder='passthrough')
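For context, a minimal sketch of how this could be wrapped into a full pipeline; the final estimator and the fit call are illustrative assumptions, not part of the original answer:

import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.preprocessing import FunctionTransformer, MinMaxScaler, StandardScaler

# Apply several transformations to the same column and keep all of the results
union = FeatureUnion([("MinMax", MinMaxScaler()),
                      ("SS", StandardScaler()),
                      ("Log", FunctionTransformer(np.log1p))])
proc = ColumnTransformer([('trylots', union, ['Value_In_Dollars'])],
                         remainder='passthrough')

model = Pipeline([("features", proc), ("clf", LogisticRegression())])
# model.fit(X_train, y_train)  # X_train is assumed to contain a 'Value_In_Dollars' column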
Say I have a dataset with a bunch of numerical features. I'm not sure what the best way is to use the numerical features in a model, so I decide to apply different transformations to them and add the results to the dataset. These transformations could be MinMax scaling, standard scaling, a log transform, ... whatever you can think of.

So basically, in the raw data I might only have the feature "Value_in_Dollars", and after all transformations I also want to have the transformed features in the dataset: "Value_in_Dollars_MinMax", "Value_in_Dollars_SS", "Value_in_Dollars_Log", in addition to the original column.

I know how to do this manually, but how would I do this in a sklearn pipeline? Is this even possible?
Sklearn Pipeline to add new features