Response | Instruction | Prompt
---|---|---|
Check whether ADF's built-in capabilities can fulfill your need: the HTTP connector in a copy activity retrieves data from a web endpoint and copies it to any supported sink data store (ADF v1 and v2): https://learn.microsoft.com/en-us/azure/data-factory/connector-http. The Web activity invokes an API request (ADF v2 only): https://learn.microsoft.com/en-us/azure/data-factory/control-flow-web-activity. The If Condition activity does branching (ADF v2 only); it is not yet well documented, but you can get a sense of it from the SDK reference: https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.management.datafactory.models.ifconditionactivity?view=azure-dotnet
|
I am working with an Azure Data Factory pipeline as described below. 1) A pipeline activity logs in to an address via a web API POST request (200 or 400). 2) A pipeline activity connects to the address via the web API to get JSON data. To achieve this, I believe the pipeline has the two activities 1 and 2 above. The result of 1 can be either 200 or 400 (200: continue to 2 / 400: error). How can activity 1 detect a 400 error? Do I need to implement a .NET custom activity?
|
What is the technology for an Azure Data Factory pipeline to log in to a certain URL?
|
There is an option to set the maximum number of runs per DAG in the global airflow.cfg file. The parameter to set is max_active_runs_per_dag.
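For a per-DAG limit, the same effect can be had by setting max_active_runs on the DAG itself, which keeps a backfill running one day at a time. A minimal sketch (my addition, not part of the original answer; the DAG id is taken from the question's example):
from datetime import datetime
from airflow import DAG

dag = DAG(
    dag_id='Dag_A',                    # hypothetical id from the question
    start_date=datetime(2017, 1, 1),
    schedule_interval='@daily',
    max_active_runs=1,                 # at most one active DAG run, so the backfill proceeds day by day
)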
|
When I backfill a DAG for specific dates, I want it to run sequentially, i.e. day by day, completing all the tasks for a specific day and then moving on to the next day, and so on. I have used the depends_on_past argument, but it only helps me set the dependency on tasks, not on DAG runs. Example: Dag_A has 4 tasks; I use backfill with depends_on_past, and after executing the first task of Dag_A (first day) it triggers the first task of Dag_A (second day), which I don't want.
|
Airflow backfilling DAG run dependency
|
Well, we have both approaches in place. The initial AWS provisioning has, as its last step, a null resource that runs an Ansible playbook which does the initial code deployment. Subsequent code deployments are done with standalone Jenkins + Ansible jobs.
|
We are in the process of setting up a new release process in AWS. We are using Terraform with Elastic Beanstalk to spin up the hardware to deploy to (although the actual tools are irrelevant). As Elastic Beanstalk does not support immutable deployments in Windows environments, we are debating whether to have a separate pipeline to deploy our infrastructure or to run Terraform on all code deployments. The two things are likely to have different rates of churn, which feels like a good reason to separate them. This would also reduce risk, as there is less to deploy. But it means code could be deployed to snowflake servers, and it means QA and live hardware could get out of sync, so we would not be testing like for like. Does anyone have experience of the two approaches and care to share which has worked better and why?
|
Infrastructure and code deployment in same pipeline or different?
|
It does decrement if you don't use a pipeline (and avoid forking a subshell):
x=10
f() {
if ((x)); then
echo $((x--))
f
fi
}
Then call it as:
f
It will print:
10
9
8
7
6
5
4
3
2
1
Since the decrement is happening inside the subshell, the current shell doesn't see the decremented value of x and goes into infinite recursion.
EDIT: You can try this workaround:
x=10
f() {
if ((x)); then
x=$(tr 0-9 A-J <<< $x >&2; echo $((--x)))
f
fi
}
f
To get this output:
BA
J
I
H
G
F
E
D
C
B
|
The variable x in the first example doesn't get decremented, while in the second example it works. Why?
Non-working example:
#!/bin/bash
x=100
f() {
echo $((x--)) | tr 0-9 A-J
# this also wouldn't work: tr 0-9 A-J <<< $((x--))
f
}
f
Working example:
#!/bin/bash
x=100
f() {
echo $x | tr 0-9 A-J
((x--))
# this also works: a=$((x--))
f
}
f
I think it's related to subshells, since the individual commands in the pipeline are running in subshells.
|
Bash variable not decrementing in a pipeline
|
I've done it as below, and it works. I have altered the function:
class Cat(TransformerMixin):
    def transform(self, X, y=None, **fit_params):
        enc = DictVectorizer(sparse = False)
        encc = enc.fit(df[['c', 'd']].T.to_dict().values())
        enc_data = encc.transform(X.T.to_dict().values())
        return enc_data
    def fit_transform(self, X, y=None, **fit_params):
        self.fit(X, y, **fit_params)
        return self.transform(X)
    def fit(self, X, y=None, **fit_params):
        return self
And a new dataset:
control = pd.DataFrame({'c':['b'], 'd': ['f']})
pred = pipeline.predict(control)
pred
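A slightly tidier variant (my own sketch, not part of the original answer) fits the DictVectorizer on the training data inside fit() and reuses it in transform(), so the transformer no longer depends on the global df:
from sklearn.base import TransformerMixin
from sklearn.feature_extraction import DictVectorizer

class Cat(TransformerMixin):
    # Learn the categorical encoding from the data passed to fit, reuse it in transform
    def fit(self, X, y=None, **fit_params):
        self.enc_ = DictVectorizer(sparse=False)
        self.enc_.fit(X.T.to_dict().values())
        return self
    def transform(self, X, **transform_params):
        return self.enc_.transform(X.T.to_dict().values())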
|
I am trying to build a pipeline with categorical variables:
import numpy as np
import pandas as pd
import sklearn
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn import linear_model
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction import DictVectorizer
df = pd.DataFrame({'a':range(6), 'c':['a', 'b', 'c']*2, 'd': ['m', 'f']*3 })
X = df[['c', 'd']]
y = df['a']
regressor = linear_model.SGDRegressor()
Transform categorical variables:
class Cat(TransformerMixin):
    def transform(self, X, **transform_params):
        enc = DictVectorizer(sparse = False)
        enc_data = enc.fit_transform(X.T.to_dict().values())
        return enc_data
    def fit(self, X, y=None, **fit_params):
        return self
Pipeline:
pipeline = Pipeline([
    ('categorical', Cat()),
    ('model_fitting', regressor),
])
pipeline.fit(X, y)
That works. But I get an error when I try to use it on a new dataset, for example:
contr = pd.DataFrame({'c':['a'], 'd': ['m']})
pred = pipeline.predict(contr)
pred
and get
ValueError: shapes (1,2) and (5,) not aligned: 2 (dim 1) != 5 (dim 0)
I see that the problem is in class Cat(TransformerMixin). How can I improve it?
|
Categorical variables in pipeline: dimension mismatch
|
You're not meant to call the estimator methods on the class directly; you're meant to call them on a class instance. This is because estimators often have some type of stored state (the model coefficients, for example):
u = UnderSampling()
u.fit(X, y)                    # fit returns self, so there is nothing to unpack
a, b = u.fit_transform(X, y)   # fit_transform returns the resampled X and y
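As a side note (my addition, not part of the original answer): plain sklearn pipelines never let a transformer change y, so even a correctly instantiated UnderSampling step won't resample inside a Pipeline. The third-party imbalanced-learn package provides a pipeline that does support this; a sketch, assuming that package is installed:
from imblearn.pipeline import make_pipeline
from imblearn.under_sampling import RandomUnderSampler
from sklearn.ensemble import RandomForestClassifier

# RandomUnderSampler resamples X and y during fit; the classifier then trains on the balanced data
model = make_pipeline(RandomUnderSampler(random_state=0), RandomForestClassifier())
model.fit(X, y)   # X, y as defined in the question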
|
Given some fake data:
X = pd.DataFrame( np.random.randint(1,10,28).reshape(14,2) )
y = pd.Series( np.repeat([0,1], [10,4]) ) # imbalanced with more 0s than 1s
I wrote a sklearn fit-transformer that under-samples the majority class of y to match the length of the minority label. I want to use it in a pipeline.
from sklearn.base import BaseEstimator, TransformerMixin
class UnderSampling(BaseEstimator, TransformerMixin):
    def fit(self, X, y): # I don't need fit to do anything
        return self
    def transform(self, X, y):
        is_pos = y == 1
        idx_pos = y[is_pos].index
        random.seed(random_state)
        idx_neg = random.sample(y[~is_pos].index, is_pos.sum())
        idx = sorted(list(idx_pos) + list(idx_neg))
        X_resampled = X.loc[idx]
        y_resampled = y.loc[idx]
        return X_resampled, y_resampled
    def fit_transform(self, X, y):
        return self.transform(X,y)
Most unfortunately, I cannot use it in a pipeline.
from sklearn.pipeline import make_pipeline
us = UnderSampling()
rfc = RandomForestClassifier()
model = make_pipeline(us, rfc)
model.fit(X,y)
How can I make this pipeline work?
|
How to write a fit_transformer with two inputs and include it in a pipeline in python sklearn?
|
You should only use the Ruffus decorators when there are input/output files. For example, if task1 generates file1.txt and this is the input for task2, which generates file2.txt, then you could write a pipeline as follows:
@originate('file1.txt')
def task1(output):
    with open(output,'w') as out_file:
        # write stuff to out_file
        pass
@follows(task1)
@transform(task1, suffix('1.txt'),'2.txt')
def task2(input_,output):
    with open(input_) as in_file, open(output,'w') as out_file:
        # read stuff from in_file and write stuff to out_file
        pass
If you just want to take a dictionary as an input, you don't need Ruffus; you can just order the code appropriately (as it will run sequentially) or call task1 in task2:
def task1():
    properties = {'status': 'original'}
    return properties
def task2():
    properties = task1()
    properties['status'] = 'updated'
    return properties
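If you do keep the decorated, file-based version, the pipeline is executed with Ruffus's pipeline_run; a minimal sketch (my addition, not part of the original answer):
from ruffus import pipeline_run

# Runs task1 first (since task2 depends on it), then task2
pipeline_run([task2])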
|
I'd like to create a pipeline with the Ruffus package for Python, and I am struggling with its simplest concepts. Two tasks should be executed one after the other, and the second task depends on the output of the first task. In the Ruffus documentation everything is designed for import/export from/to external files; I'd like to handle internal data types like dictionaries. The problem is that @follows doesn't take inputs and @transform doesn't take dicts. Am I missing something?
def task1():
    # generate dict
    properties = {'status': 'original'}
    return properties
@follows(task1)
def task2(properties):
    # update dict
    properties['status'] = 'updated'
    return properties
Eventually the pipeline should combine a set of functions in a class that updates the class object on the go.
|
Ruffus pipeline with internal inputs
|
The way I usually do it is with a FeatureUnion, using a FunctionTransformer to pull out the relevant columns. Important notes: you have to define your functions with def, since annoyingly you can't use lambda or partial in FunctionTransformer if you want to pickle your model, and you need to initialize FunctionTransformer with validate=False. Something like this:
from sklearn.pipeline import make_union, make_pipeline
from sklearn.preprocessing import FunctionTransformer, LabelEncoder, MinMaxScaler
def get_text_cols(df):
    return df[['name', 'fruit']]
def get_num_cols(df):
    return df[['height','age']]
vec = make_union(*[
    make_pipeline(FunctionTransformer(get_text_cols, validate=False), LabelEncoder()),
    make_pipeline(FunctionTransformer(get_num_cols, validate=False), MinMaxScaler())
])
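As a side note (my addition, not part of the original answer): in newer scikit-learn versions a ColumnTransformer does this column-wise routing directly, and OneHotEncoder is the feature-friendly replacement for LabelEncoder. A minimal sketch, assuming the same column names:
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, MinMaxScaler

vec = ColumnTransformer([
    ('text', OneHotEncoder(handle_unknown='ignore'), ['name', 'fruit']),   # categorical columns
    ('num', MinMaxScaler(), ['height', 'age']),                            # numeric columns
])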
|
I am pretty new to pipelines in sklearn and I am running into this problem: I have a dataset that has a mixture of text and numbers, i.e. certain columns have text only and the rest have integers (or floating point numbers). I was wondering if it was possible to build a pipeline where I can, for example, call LabelEncoder() on the text features and MinMaxScaler() on the number columns. The examples I have seen on the web mostly point towards using LabelEncoder() on the entire dataset and not on select columns. Is this possible? If so, any pointers would be greatly appreciated.
|
Sklearn Transform Different Columns Differently In Pipeline - Ex: X[col1] gets tfidf, X[col2] gets label encoding? [duplicate]
|
I think I can answer myself. What I did is:
1) Created an interface IProcessor with a method Process().
2) Wrapped AlbumProcessing and PhotoProcessing with the IProcessor interface.
3) Created one ActionBlock that takes IProcessor as input and executes the Process method.
4) At the end of processing an album, I add the processing of all its photos to the ActionBlock.
This fulfills my requirements 100%. Maybe someone has some other solution?
|
I have a question about implementing a pipeline using the TPL Dataflow library. My case is that I have software that needs to process some tasks concurrently.
Processing looks like this: first we process the album at a global level, and then we go inside the album and process each picture individually. Let's say that the application has processing slots and they are configurable (for the sake of the example assume slots = 2). This means that the application can process either: a) two albums at the same time, b) one album + one photo from a different album, c) two photos at the same time from the same album, d) two photos at the same time from different albums. Currently I implemented the process like this:
var albumTransferBlock = new TransformBlock<Album, Album>(ProcessAlbum,
new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 2 });
ActionBlock<Album> photoActionBlock = new ActionBlock<Album>(ProcessPhoto);
albumTransferBlock.LinkTo(photoActionBlock);
Album ProcessAlbum(Album a)
{
return a;
}
void ProcessPhoto(Album album)
{
foreach (var photo in album)
{
// do some processing
}
}
The problem I have is that when I process one album at a time, the application will never use two slots for processing photos. It meets all the requirements except c). Can anyone help me solve this issue using TPL Dataflow?
|
Dataflow TPL Implementing pipeline with precondition
|
You can serialize the object using Newtonsoft Json.
See https://azure.microsoft.com/en-us/documentation/articles/data-factory-create-data-factories-programmatically/ for how to connect via the ADF SDK:
var aadTokenCredentials = new TokenCloudCredentials(ConfigurationManager.AppSettings["SubscriptionId"], GetAuthorizationHeader());
var resourceManagerUri = new Uri(ConfigurationManager.AppSettings["ResourceManagerEndpoint"]);
var manager = new DataFactoryManagementClient(aadTokenCredentials, resourceManagerUri);
var pipeline = manager.Pipelines.Get(resourceGroupName, dataFactoryName, pipelineName);
var pipelineAsJson = JsonConvert.SerializeObject(pipeline.Pipeline, Formatting.Indented);
I was expecting something more complex, but looking at the SDK source on GitHub it is not doing anything special.
|
I want to track pipeline changes in source control, and I'm looking for a way to programmatically retrieve the JSON representation from ADF. The .NET routines return the objects, but sadly ToString() does not return JSON (wouldn't THAT be convenient?), so right now I'm looking at copying the JSON down by hand (shoot me now!), or possibly trying to recreate the JSON from the .NET objects (shoot me later!). Please tell me I'm being dense and there is an obvious way to do this.
|
How do I retrieve the json representation of an azure data factory pipeline?
|
The Deploy to Bluemix capability is the only way to generate a Bluemix project's pipeline from a YAML file at this time.
|
When we check the code (.bluemix/pipeline.yml) in to the IBM JazzHub project on Bluemix, it doesn't get added automatically to the parent project's pipeline as stages (as specified in the pipeline.yml). Any project that gets cloned through "Deploy to Bluemix" is seen with the added pipeline instructions (fetched from the yml) without adding them manually. How can I add the pipeline instructions to the parent project itself, with something like an import, through the command line, or by just checking in the pipeline.yml?
|
Why are IBM Bluemix DevOps pipeline stages from the .bluemix/pipeline.yml file's stage instructions added only when we deploy a cloned project?
|
Just re-build build#20, call it build#22, and deploy it.
|
I am using Jenkins to set up a DevOps pipeline for a Java project and am stuck on one deployment scenario: if a build on the Tomcat server has to be rolled back, the build that replaces it has to be the last stable build.
Suppose build#20 is deployed on the server and is stable, and build#21 is deployed in the next build cycle, but after deployment it is found that this new build has issues. Now I want to replace this build with the previous build, that is build#20.
The plugin I am using in Jenkins doesn't provide a rollback facility. Please help me out.
Plugin for deployment: https://wiki.jenkins-ci.org/display/JENKINS/Deploy+Plugin
|
How to rollback on Tomcat in case of unstable build using Jenkins?
|
The way I have solved this in the past is to use the Data object in an exception to pass private data between the layers.
try
{
}
catch (SomeSpecificException spex)
{
var exception = new Exception();
exception.Data.Add("Something", "Specific");
throw exception;
}
Basically, in my adapter layers I have code that converts any specific exceptions into general ones. Then in the adapter layer on the other side, I can inspect the data object and convert that to an exception that is usable by its callers.
|
I am building a MAF pipeline which an add-in can also use for callbacks to the host system, to use some service from the host. Those methods can throw exceptions which should be handled by the add-in. Handling them in that case should not only mean just catching them but also analyzing them. As always, there are two options to get objects across the AppDomain border: by serializing them or by extending MarshalByRefObject. Actually I am having problems with both options. When I am using serialization, my add-in needs to know the exact types of the exceptions, as it otherwise can't deserialize them. That means I can't work on an abstraction layer here. The Exception class itself is marked as serializable, so all subclasses also need to be marked as serializable for this to work. For me this is not really a solution, as I can't isolate the types between host and add-in (because I can't work on abstractions). Using MarshalByRefObject won't work either, as all exceptions need to extend Exception and therefore can't extend MarshalByRefObject. Is there any standard pattern which could solve this problem?
|
How to create pipeline items for exceptions
|
I found an answer to this question. I can transform my RDD (parsedData) to a SchemaRDD, which is a sequence of LabeledDocuments, with the following code:
val rddSchema = parsedData.toSchemaRDD;
Now the problem has changed! I want to split the new rddSchema into training (80%) and test (20%). If I use randomSplit, it returns an Array[RDD[Row]] instead of a SchemaRDD. New problem: how do I transform Array[RDD[Row]] to SchemaRDD -- OR -- how do I split a SchemaRDD so that the results are SchemaRDDs?
|
I want to use the implementation of pipelines in MLlib. Before, I had an RDD file and passed it to the model creation, but now, to use a pipeline, there should be a sequence of LabeledDocuments to be passed to the pipeline. I have my RDD which is created as follows:
val data = sc.textFile("/test.csv");
val parsedData = data.map { line =>
val parts = line.split(',')
LabeledPoint(parts(0).toDouble, Vectors.dense(parts.tail))
}.cache()
In the pipeline example in the Spark Programming Guide, the pipeline needs the following data:
// Prepare training documents, which are labeled.
val training = sparkContext.parallelize(Seq(
LabeledDocument(0L, "a b c d e spark", 1.0),
LabeledDocument(1L, "b d", 0.0),
LabeledDocument(2L, "spark f g h", 1.0),
LabeledDocument(3L, "hadoop mapreduce", 0.0),
LabeledDocument(4L, "b spark who", 1.0),
LabeledDocument(5L, "g d a y", 0.0),
LabeledDocument(6L, "spark fly", 1.0),
LabeledDocument(7L, "was mapreduce", 0.0),
LabeledDocument(8L, "e spark program", 1.0),
LabeledDocument(9L, "a e c l", 0.0),
LabeledDocument(10L, "spark compile", 1.0),
LabeledDocument(11L, "hadoop software", 0.0)))
I need a way to change my RDD (parsedData) to a sequence of LabeledDocuments (like training in the example). I appreciate your help.
|
Spark: How to transform a RDD to Seq to be used in pipeline
|
Use the string REPLACE function of MySQL:
SELECT REPLACE('www.mysql.com', 'w', 'Ww');
-> 'WwWwWw.mysql.com'
Or check out another function you could need: https://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_replace
In your case you might want to replace | with \n or \r:
SELECT REPLACE(column, '|', '\r');
|
I have a column which returns pipe-delimited records as below:
'fromState=A|Count=5|highLimit=B|status=C|presentValue=D|alarmValue=E'
Is there a way to have it as a more formatted output like:
'fromState=A
Count=5
highLimit=B
status=C
presentValue=D
alarmValue=E'
i.e., the pipes are replaced and each value is on a new line (optional)
|
How do I delimit a fetched data record on MySQL?
|
Make sure that the ls you are running from the shell and the ls that is running in your program are the same program. Your program is specifying /bin/ls as the program to run; you can find out what is being run when you type the command at the shell prompt by using the shell command which ls (also see type ls). If these are different, it could be due to the POSIX vs. GNU blocksize used in the total size computation.
|
*Edit - Stephen has answered this question in the comments below.* Basically, I have made two separate child processes (using two separate methods, each with its own fork) to execute the command ls -la | less using a pipe. The first one executes ls like this:
execl("/bin/ls", "ls", "-la", NULL);
The second child process executes less like this:
execlp("less", "less", NULL);
And the results come up fine, apart from one little part. Results using the shell command:
total 15
drwxr-xr-x 2 daniel staff 4 2015-02-27 18:58 .
drwxr-xr-x 15 daniel staff 24 2015-02-27 18:58 ..
-rwxr-xr-x 1 daniel staff 9280 2015-02-27 18:58 pipes
-rw-r--r-- 1 daniel staff 1419 2015-02-27 18:58 pipes.c
Results using my executable:
total 30
drwxr-xr-x 2 daniel staff 4 Feb 27 18:58 .
drwxr-xr-x 15 daniel staff 24 Feb 27 18:58 ..
-rwxr-xr-x 1 daniel staff 9280 Feb 27 18:58 pipes
-rw-r--r-- 1 daniel staff 1419 Feb 27 18:58 pipes.c
Now, I don't care that the date is in a different format, but the total size is twice as large with my executable (30 vs 15). Why is this happening?
|
Unix - pipeline ls - la | less C executable giving double total file size vs shell
|
You are right, there would be two bubbles. Assuming data forwarding:
1. IF ID EX MM WB
2.    IF  S  S ID EX MM WB
(S means stall or bubble.) There is no way you can 'fix' it, because in any case you have to wait for the end of the MM stage to have the value at 1000($6). It could even be worse without data forwarding, where you would have to wait until the WB stage, meaning 3 stalls. The only way to prevent such behaviour is to have smart compilers that schedule those two instructions differently (i.e. space them out by adding others in between). Note that, as it is, the program has no real purpose (it gets the value at memory address [1000+Regs[$6]] and copies it to address [2000+Regs[$6]]).
|
I have been working on some low-level programming with 5-stage pipelining, but I hit a snag. Assuming this diagram https://i.stack.imgur.com/Sbe0C.png and the MIPS code:
lw $4,1000($6)
sw $4,2000($6)
what would actually happen? I assumed there would be bubbles; I counted two bubbles preceding the ID stage. Can we fix it by adding inputs to the new forwarding unit? Where can I add muxes and new datapaths to avoid bubbles and errors?
|
Fixing load-use hazard issue in pipeline (MIPS)
|
Create the directory before executing the rule.
DATADIR := $(shell cd DATA; find * -type d)
create_results_dir:= $(shell for i in $(DATADIR); \
do test -d DATA/$$i && mkdir -p RESULTS/$$i; \
done)
all:
@echo do something.
|
I'm using make to write a pipeline for biological data analysis. My project directory is:
PROJECT
- DATA
- SAMPLEA
- A1.FASTQ A2.FASTQ
- SAMPLEB
- B1.FASTQ B2.FASTQ
- RESULTS
- SRC
- makefile
My current makefile uses a wildcard to list the directory of all .FASTQ files in the DATA directory. Using pattern rules, each .FASTQ file then goes through a series of recipes with the final output file written to the RESULTS directory. Instead, I would like to create a directory for each SAMPLE where the final output file is written:
PROJECT/RESULTS/SAMPLEA/A1.out
PROJECT/RESULTS/SAMPLEA/A2.out
PROJECT/RESULTS/SAMPLEB/B1.out
PROJECT/RESULTS/SAMPLEB/B2.out
I can do this by having the first recipe make the directory; however, this throws an error when the second of the FASTQ files from the same SAMPLE also tries to create the directory. A few posts on Stack Overflow suggest using the -p flag on mkdir to ignore errors, however this apparently causes problems when I run the makefile in parallel using the -j flag. I thought about forcing a shell script at the start of the makefile to run, to check if the results directories are present and, if not, create them, but I'd like to try and solve this issue using just make.
|
Create directory only once when running makefile in parallel
|
I solved that task by using 2 Jenkins extensions:
https://wiki.jenkins-ci.org/display/JENKINS/EnvInject+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/Build+Flow+Plugin
Create a properties file from the test. The file contains a property that indicates the result of the test step status.
With the EnvInject Plugin, add a new step into the Jenkins job (the step must be after the test run) and inject the parameter value (from the file created in the first step).
Create a build flow with the Build Flow Plugin.
Write a Groovy script:
smokeTest = build( "Run_Smoke_Test" )
def isTestStepSuccessful = smokeTest.environment.get( "TestStepSuccessful" )
if (isTestStepSuccessful != "false") {
parallel (
{
build("Run_Critical_Path_Part_1_Test")
build("Run_Critical_Path_Part_3_Test")
},
{
build("Run_Critical_Path_Part_2_Test")
}
)
}
build( "Run_Critical_Path_Final_Test" )
|
I have 3 Jenkins jobs: smoke tests, critical path test (part 1), and critical path test (part 2). Now they start one by one. I need to create a build pipeline that depends on the test result. I need to take into account the result of a single test (@Test annotation in TestNG), ignoring the overall result of the test suite. I want to get a configuration like this: smoke tests -> if a specified test passed, then run critical path test Part 1 and Part 2 on different nodes. So, please tell me how to depend in Jenkins on only one test's result (not the whole suite)?
|
How to create a Jenkins build pipeline that depends on a test result?
|
You can use the org.apache.hadoop.hive.metastore API or the HCat API. Here is a simple example of using hive.metastore. You would have to make the call to one or the other before starting your Pipeline, unless you want to join to some Hive partition in the mapper/reducer.
HiveMetaStoreClient hmsc = new HiveMetaStoreClient(hiveConf)
HiveMetaStoreClient hiveClient = getHiveMetastoreConnection();
List<Partition> partitions = hiveClient.listPartittions("default", "my_hive_table", 1000)
for(Partition partition: partitions) {
System.out.println("HDFS data location of the partition: " + partition.getSd().getLocation())
}
The only other thing you will need is to export the Hive conf dir:
export HIVE_CONF_DIR=/home/mmichalski/hive/conf
|
I am able to read text files in HDFS into an Apache Crunch pipeline, but now I need to read the Hive partitions. The problem is that, as per our design, I am not supposed to access the files directly. Hence, I need some way by which I can access the partitions using something like HCatalog.
|
How to read a hive partition into an Apache Crunch pipeline?
|
All registers are read in the ID stage, thus there are no hazards in trying to read registers. This does mean that some instructions will have to stall in ID if the registers they want to read aren't finished yet. That's where "bypassing" can help.
|
Consider the following execution of instructions in a 5-stage pipeline
(IF - ID - EX - MEM - WB)
where "SD N(R2), R1" means store data from register R1 to memory position M[N+R2], "ADD R3, R1, R2" performs the operation R1 + R2 and stores the result in R3, and NOP is a bubble.For what I understand registers are read on the ID stage.So, if I have the following instructions:I1: SD 0(R2), R6
NOP
I2: ADD R3, R1, R2then the execution goes as following (I hope it looks clear)R2 is read
^ Store M[0+R2] <- R6
^ ^
I1: | IF | ID | EX | MEM | WB |
NOP: |////|////|////|////|////|
I2: | IF | ID | EX | MEM | WB |
v
R2 is read
Is there a hazard on the 4th cycle, when I1 is in the MEM stage and I2 in the ID stage, because both instructions want to access R2 at the same time? Or is there no hazard, since R2 is only read in the ID stages and therefore is not accessed in the MEM stage?
|
Pipeline hazard handling with store
|
I think this is the as-designed behavior for cmdlets that take pipeline input. When input is pipelined, each object is treated individually, but if you pass the collection as the InputObject, the collection is treated as a single object. You can try it with Select-Object: there is a distinct difference between ps | Select -First 1 and Select -First 1 -InputObject (ps). In your case, pipelining causes each object to run through the Out-String processing step, which prints a header for it. However, when you use the InputObject parameter, the whole collection is sent to Out-String, resulting in a single header. To make your function consistent with other cmdlets and functions, you should design your process step to handle a single item from the pipelined input. Otherwise, be prepared to handle the case where you get a collection as a single object. There is an open Connect issue on this, but I doubt it will get fixed, as it is the as-designed behavior, I believe. It looks like in newer versions of PowerShell they are just clarifying the actual behavior in the help. From the help for Select-Object in PowerShell 4: "When you use the InputObject parameter with Select-Object, instead of piping command results to Select-Object, the InputObject value - even if the value is a collection that is the result of a command, such as -InputObject (Get-Process) - is treated as a single object." Also, typically when you have InputObject, its type is just PSObject, not an array.
|
I have a PowerShell script that takes pipeline input, processes each item, and then calls Write-Host on each one. When I call the script using parameter input, my foreach loop writes out a single header row and then the data below it. When called via the pipeline, I get one header row and one data row for each row of input. I guess PowerShell has some code that figures out when you're doing a Write-Host in a foreach loop and only writes the header once in that case. So my question is, how should I be going about this so that the behavior is consistent between both forms of input? I'm sure I'm doing it wrong, but I don't know what the right way is. Here's my script.
param (
    [Parameter(Position=0, Mandatory=$true, ValueFromPipeline=$true)]
    [PSObject[]] $InputObject
)
process {
    $lines = ($InputObject | Out-String) -replace "`r", "" -split "`n"
    foreach ($line in $lines) {
        #Process $line here
        Write-Host $line
    }
}
And here are two sample outputs:
MyScript.ps1 $(ps | Select-Object -First 10)
$(ps | Select-Object -First 10) | MyScript.ps1
|
How should I use Write-Host so that pipeline and parameter input behave the same way?
|
Assuming a 4-frame lag is acceptable, you could use a pool of list nodes, each with a pointer to a frame buffer and a pointer to the intermediate values (a NULL pointer could be used to indicate end of a stream). Each thread would have its own list as part of a multithreaded messaging system. The first thread would get a frame node from a free pool, do its processing, and send the node to the next thread's list, and so on, with the last thread returning nodes back to the free pool. Here is a link to an example file copy program that spawns a thread to do the writes. It uses Windows threading, mutexes, and semaphores in the messaging functions, but the messaging functions are simple and could be changed internally to use generic equivalents without changing their interface. The main() function could be changed to use generic threading and setup of the mutexes and semaphores, or something equivalent. mtcopy.zip
|
Sorry if this has been asked, I did my best to search for a dup before asking...I am implementing a video processing application that is supposed to run real time. The processing it does can be easily divided into 4 stages, each one operating on the intermediate values generated from the previous stage. The processing has become heavier than what can be processed in 1/30th of a second, but if I can split this application in 4 threads and turn it into a pipeline, each stage takes less than that and the whole thing would run realtime (with a 4 frame lag, which is completely acceptable).I'm fairly new to multithreading programming, and the problem I'm having is, I can't find a mechanism to start/stop each thread at the beginning of each frame, so they all march together, delivering one finished frame every "cycle" at the end. All the frameworks/libraries I found seem to worry about load balancing using queues and worker threads, but this is not what I need here. Four threads will do, assuming I can keep them synced.Can anybody point me to a starting point, using C++?Thanks.
|
C++ synced stages multi thread pipeline
|
Assuming you switch in the sensible way (with BLX/BX LR), any modern core will predict that (assuming the branch predictor isn't turned off, of course). Writing to the PC directly is a little more variable - generally, big cores might predict it but little cores won't - but is generally best avoided anyway. Otherwise an interworking branch is, AFAIK, no different from a regular branch, so if it isn't predicted the penalty is just a pipeline flush. The only other way to switch instruction sets is via an exception return, which is a synchronising operation for the whole core (i.e. not the place to be worrying about performance).
|
In ARM architecture, if an ARM to Thumb mode switch occurs, will a pipeline stall occur?
If so, how many cycles are affected?
Is this same for Thumb to ARM mode switching ?
Does this behavior vary with different ARM processors ?
|
Does a pipeline stall occur on an ARM to Thumb switch?
|
You can use the Jenkins "Build Flow Plugin" to run your jobs in parallel. In that case your final job will be executed after the parallel jobs complete.
|
I have 2 pipelines in Jenkins and I need to run a final job if the last 2 jobs in the 2 pipelines are successful. Job 1 (which builds periodically at 7 PM) will call 2 jobs, job_pipeline1_1 and job_pipeline2_1:
job1
job_pipeline1_1 -- job_pipeline1_2
job_pipeline2_1 -- job_pipeline2_2
job_final (should be called only after job_pipeline1_2 and job_pipeline2_2 are successful)
job_pipeline1_1 and job_pipeline1_2 are independent of job_pipeline2_1 and job_pipeline2_2 and will run on different servers. job_final should be called only if job_pipeline1_2 and job_pipeline2_2 are successful in that particular build. job_final should be in the pipeline. Check this image: https://i.stack.imgur.com/58Upc.png
Can anyone help me in this regard?
Thanks in advance.
|
Jenkins: running a job if 2 jobs in 2 different pipelines are successful
|
With GStreamer you can run 2 pipelines in the same process without having to worry about threading, as it's already handled internally.
void
start (GError **error) {
GstElement *pipe1;
GstElement *pipe2;
*error = NULL;
pipe1 = gst_parse_launch ("src ! enc ! mux ! sink", error);
if (*error != NULL)
return;
pipe2 = gst_parse_launch ("src ! demux ! dec ! sink", error);
if (*error != NULL)
return;
gst_element_set_state (pipe1, GST_STATE_PLAYING);
gst_element_set_state (pipe2, GST_STATE_PLAYING);
}
|
I'm reading the Android tutorial for GStreamer. I'd like to make a simple pipeline from one Android phone to another, like this. I've read these questions: loading same gstreamer elements multiple times in a process, and JNI - multi threads, but they didn't help me resolve my current issue. I'd like to make two processes so the Android phone can send and receive audio! On Linux I would use fork(), like this:
p = fork();
if p==0{
//pipeline1
}
else {
//pipeline2
}
But this doesn't work on Android; I get this error:
{
g_source_set_callback: assertion `source != NULL' failed
Fatal signal 11 (SIGSEGV) at 0x00000010 (code=1)
}
How can I resolve this?
|
Use multiple processes in Android gstreamer!!
|
The vipe command from moreutils needs to solve a similar problem and uses this technique:
It writes everything from STDIN to a temporary file:
my ($fh, $tmp)=tempfile();
print ($fh <STDIN>) || die "write temp: $!";
close STDIN;It redirects STDIN and STDOUT to/dev/ttyopen(STDIN, "</dev/tty") || die "reopen stdin: $!";
open(OUT, ">&STDOUT") || die "save stdout: $!";
close STDOUT;
open(STDOUT, ">/dev/tty") || die "reopen stdout: $!";Launches$EDITOR
|
I have a script, myscript, that collects some information, puts it in a temporary file, launches $EDITOR on that file, and waits for the user to be done - similar to what happens with git commit when it opens $EDITOR to let you enter a commit message. Basically myscript is:
salt=$(collect_salt_from_various_sources)
password=$(openssl passwd -salt $salt $$)
tempfile=$(mktemp)
printf "username=CHOOSE A NAME\npassword=$password\n" > $tempfile
$EDITOR $tempfile
# read data from $tempfile
I would like to use my script in a pipeline, so it receives information from stdin:
# "aabbcc" will be used as part of the salt
echo "aabbcc" | myscriptThe problem here is that command-line editors (e.g., Vim, Nano) cannot access the terminal any longer and the user is not able to type (almost) anything.Is there a way to launch$EDITORfrom a pipelined script so that command line editors still work?
|
Launching an editor from a pipelined command
|
Hmm, using pipeline input doesn't work, but it does work if you invoke the function without using pipeline input:
function Test-Function
{
[CmdLetBinding()]
Param (
[int]$MaxRetrycount = 3,
[Parameter(ValueFromPipeline=$True)] [String]$Definition
)
return $MaxRetrycount
}
workflow Test-Workflow
{
$PSComputerName
$data = 'abc','xyz'
foreach ($d in $data) {
Test-Function -MaxRetrycount 2 -Definition $d
}
$JobName
}
Test-Workflow
Outputs:
Test-Workflow
2
Job37
|
I have the following PowerShell script:
function Testing
{
[CmdLetBinding()]
Param (
[int]$MaxRetrycount = 3,
[Parameter(ValueFromPipeline=$True)] [String]$Definition
)
return $MaxRetrycount
}
workflow Test-Workflow
{
$PSComputerName
$data = 'abc','xyz'
$data | Testing -MaxRetrycount 2 -Definition
$JobName
}
Test-Workflow
But executing this script gives me an error like: The 'Testing' activity is not supported in a workflow pipeline. Did I make a mistake in calling the function with a command pipeline from the workflow? Thanks in advance.
|
Workflow with pipeline
|
You don't have to loop on start_urls; Scrapy is doing something like this:
for url in spider.start_urls:
    request url and call spider.parse() with its response
So your parse function should look something like:
def parse(self, response):
    hxs = HtmlXPathSelector(response)
    item = PrivacyItem()
    item['desc'] = hxs.select('//body//p/text()').extract()
    item['title'] = hxs.select('//title/text()').extract()
    return item
Also try to avoid returning lists as item fields; do something like:
hxs.select('..').extract()[0]
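For completeness (my addition, not part of the original answer), the pipeline also has to be registered in settings.py. A sketch, assuming the project package is named privacy, and noting that old Scrapy releases accept a plain list while newer ones expect a dict with an order value:
# settings.py (sketch; 'privacy' is the project package name from the question)
ITEM_PIPELINES = {
    'privacy.pipelines.CSVWriterPipeline': 300,   # lower numbers run earlier
}
# On old Scrapy releases this was a plain list instead:
# ITEM_PIPELINES = ['privacy.pipelines.CSVWriterPipeline']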
|
I have a spider which reads a list of URLs from a text file and saves the title and body text from each. The crawl works, but the data does not get saved to CSV. I set up a pipeline to save to CSV because the normal -o option did not work for me. I did change settings.py for the pipeline. Any help with this would be greatly appreciated. The code is as follows:
Items.py
from scrapy.item import Item, Field
class PrivacyItem(Item):
    # define the fields for your item here like:
    # name = Field()
    title = Field()
    desc = Field()
PrivacySpider.py
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.selector import HtmlXPathSelector
from privacy.items import PrivacyItem
class PrivacySpider(CrawlSpider):
    name = "privacy"
    f = open("urls.txt")
    start_urls = [url.strip() for url in f.readlines()]
    f.close()
    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        items =[]
        for url in start_urls:
            item = PrivacyItem()
            item['desc'] = hxs.select('//body//p/text()').extract()
            item['title'] = hxs.select('//title/text()').extract()
            items.append(item)
        return items
Pipelines.py
import csv
class CSVWriterPipeline(object):
    def __init__(self):
        self.csvwriter = csv.writer(open('CONTENT.csv', 'wb'))
    def process_item(self, item, spider):
        self.csvwriter.writerow([item['title'][0], item['desc'][0]])
        return item
|
Scrapy spider not saving to csv
|
Does this work?
Get-ChildItem .\ -Include bin,obj,Debug,ipch,Resources -Exclude "*.png","*.bmp","*.jpg","*.htm*","*.xml","*.fl*","*.css" -Recurse | foreach { Remove-Item $_.fullname -Force -Recurse; WriteToLog $msg -STATUS 'INFORMATION' }
|
I have a PowerShell script that I use to clean up a build area before fetching files from SVN again. I want to improve logging by keeping a record of what was cleaned up.
Get-ChildItem .\ _
-Include bin,obj,Debug,ipch,Resources _
-Exclude "*.png","*.bmp","*.jpg","*.htm*","*.xml","*.fl*","*.css" _
-Recurse _
| foreach ($_) { Remove-Item $_.fullname -Force -Recurse}
What I would like to insert here is some type of Format-Table for the foreach process, to get the fullname property to output to $bldLog. I have a function that formats a message for the build log with a date/time stamp, so I just call WriteToLog $msg -STATUS 'INFORMATION'. I have been trying to wrap my head around this for a couple of days now, to get both WriteToLog and Remove-Item into a pipeline, but without success. Is such a process possible, or do I just forget the pipeline and go old school?
|
Powershell pipeline both remove-item and call function
|
Just change your command string to "filesrc location=" + path + " ! decodebin2 ! gconfaudiosink"; that should work. On a side note, you should use the gst-launch tool on the command line to check whether your pipeline is working and to debug it. Also use gst-inspect to find which plugins are available on your system and what their functionality is.
|
I've been struggling with GStreamer for a while because I can't find any C# examples/tutorials. As far as I know, GStreamer uses pipelines in order to decode and then be able to send, for instance a song, to the speakers, but I tried the following, which didn't work:
Gst.Element pipeline;
string path = @"some_path.mp3";
string command = "filesrc location=" + path + " ! oggdemux ! vorbisdec ! audioconvert ! gconfaudiosink";
pipeline = Gst.Parse.Launch(command);
pipeline.SetState(Gst.State.Playing);
However, it raises an exception on the Gst.Parse.Launch line. Does anyone know a good application example, and/or can post some code, so I can start getting used to the library? Also, if you can tell me what's wrong with the code above, I'd be thankful. Without further ado,
Regards
|
Playing Audio with pipeline in Gstreamer (C#)
|
First you should know the basics of the cache. You can imagine the cache as an intermediate memory which sits between the DRAM (or main memory) and your processor, but is very limited in size. Now, when you try to access a location in memory, you will search for it first in the cache. If it is found (cache hit), the processor will take this data and resume execution. Generally a cache hit is supposed to take very few clock cycles, let's say 1 or 2. Suppose the data is not found in the cache (cache miss); then the data is fetched from main memory, filled into the cache, and fed to the processor. The processor blocks until the data is fetched. This normally takes a few hundred clock cycles, depending on the DRAM you are using. The amount of data that is fetched from DRAM is equal to the cache line size. For that, you should look up spatial locality of reference in caches. I think this should get you started.
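To make the mapping concrete for the parameters in the question (a sketch of my own, in Python purely for illustration): with 4 blocks of 4 words each (16 bytes per block, assuming 4-byte words), an address splits into tag, index, and offset like this:
BLOCK_SIZE = 16   # bytes per block (4 words of 4 bytes)
NUM_BLOCKS = 4

def split_address(addr):
    offset = addr % BLOCK_SIZE                  # byte within the block
    index = (addr // BLOCK_SIZE) % NUM_BLOCKS   # which of the 4 cache blocks
    tag = addr // (BLOCK_SIZE * NUM_BLOCKS)     # identifies which memory block is cached
    return tag, index, offset

print(split_address(0x40))  # -> (1, 0, 0): address 0x40 maps to cache block 0 with tag 1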
|
I am trying to complete a simulator for a simplified MIPS computer using Java. I believe I have completed the pipeline logic needed for my assignment, but I am having a hard time understanding what the instruction and data caches are supposed to do. The instruction cache should be direct-mapped with 4 blocks, and the block size is 4 words. So I am really confused about what the cache is doing. Is it going to memory and pulling the instruction from memory? For example, in one block will it have just the add command? Would it make sense to implement it as a 2-dimensional array?
|
Instruction cache for pipelined simulator
|
I am not sure where you have put your image_key method, but the code below is working fine for me:
class MyImagesPipeline(ImagesPipeline):
    # Name the downloaded file after the last URL segment instead of the hash
    def image_key(self, url):
        image_guid = url.split('/')[-1]
        return 'full/%s' % (image_guid)
    def get_media_requests(self, item, info):
        for image_url in item['image_urls']:
            yield Request(image_url)
I would like to change the file names of downloaded images from the hash value they get now to the image alt tag or something similar.
from scrapy.http import Request
from scrapy.contrib.pipeline.images import ImagesPipeline
from scrapy.exceptions import DropItem
from scrapy.http import Request
class DocosPipeline(object):
    def process_item(self, item, spider):
        return item
class DocosImagesPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        for image_url in item['image_urls']:
            yield Request(image_url)
    def item_completed(self, results, item, info):
        image_paths = [x['path'] for ok, x in results if ok]
        if not image_paths:
            raise DropItem("Item contains no images")
        item['image_paths'] = image_paths
        return item
I've tried overriding the image_key method, but I can't seem to get it right. Here's the method:
def image_key(self, url):
    image_guid = hashlib.sha1(url).hexdigest()
    return 'full/%s.jpg' % (image_guid)
I'm really stuck here; any help would be greatly appreciated.
|
In scrapy 0.16 how do I change the file names of images downloaded via the images-pipeline?
|
Ignoring the time taken by flushes on branches/jumps:Average no of cycles taken by an instruction in the given instruction mix =(0.23)*5 + (0.12)*4 + (0.12)*4 + (0.08)*4 + (0.45)*4 = 4.23 clock cycles(Load takes 5 cycles, Store: 4, R: 4, Jump/Branch: 4)Now,No of cycles taken by 1 instruction at an average = 4.23=> No of cycles taken by 100 instructions at an average = 423Clock frequency = 1.2Ghz=> Time taken for 1 cycle = 8.33 * 10^-10=> Time taken for 423 cycles = 3.5236 * 10^-7 = Ans
|
Suppose M5 is a five-stage pipelined implementation. I know that the five-stage pipeline has the following stages:
IF -- instruction fetch (and PC update)
ID -- instruction decode (and get operands from registers)
EX -- ALU operation (can be effective address calculation)
MA -- memory access
WB -- write back (results written to register(s))
Suppose there are 100 MIPS instructions with the following instruction mix: Loads 23%, Stores 12%, Conditional Branches 12%, Jumps
8% and R-type instructions 45%.
The CPU clock frequency is 1.2 GHzI am trying to calculate the time to execute the 100 instructions. I understand how to calculate the time for a non-pipeline using this formulaExTime = Instruction count * CPI * Clock period in secondsI convert the frequency to period using1/f = 8.33 * 10^-10 secondsBut I am unsure of a way to calculate the execution time for this pipeline and do I need to know the cycles of the pipeline implementation?Please help me out as I can't find a decent example online.
ThanksEDITI think I have found the answer!I found some information thatINSTRUCTION LATENCY = 5 time units THEREFORE
INSTRUCTION THROUGHPUT = 5 * (1 / 5) = 1 instruction per time unit
So in this case it would be:
ExTime in seconds = Number of instructions * clock cycle period in seconds
|
Trying to calculate the time to execute instructions of a five-stage Pipeline processor
|
Okay now, since this command is working (usually):
gst-launch playbin uri="rtsp://127.0.0.1:8554/live"
I decided that there can't be a compatibility problem! The problem was solved by using 'uridecodebin' instead of 'rtspsrc' and 'decodebin'. So finally I modified rtsp.py to:
return ("uridecodebin name=d uri=%s ! queue "
" ! %s ffmpegcolorspace ! video/x-raw-yuv "
" ! videorate ! video/x-raw-yuv,framerate=%d/%d ! "
" @feeder:video@ %s ! @feeder:audio@"
% (location, scaling_template, framerate[0],
framerate[1], audio_template))
Now it works (most of the time)! It's probably something with the stream or QoS...
|
I'm working on a streaming project. I have VLC running as a server, streaming an mp4 (h264/aac) RTSP stream to a Flumotion server (which is based on GStreamer). I think it's either a compatibility problem between VLC (which is based on Live555) and Flumotion (which is based on GStreamer), or the pipeline used to receive the RTSP stream is mis-written. Here's the pipeline used by Flumotion that needs to be fixed (rtsp.py lines 44-49):
return ("rtspsrc name=src location=%s ! decodebin name=d ! queue "
" ! %s ffmpegcolorspace ! video/x-raw-yuv "
" ! videorate ! video/x-raw-yuv,framerate=%d/%d ! "
" @feeder:video@ %s ! @feeder:audio@"
% (location, scaling_template, framerate[0],
framerate[1], audio_template))
Edit: The problem is that the RTSP-Producer component in Flumotion can't receive any data from the VLC stream - no errors, nothing, it just stays in 'waking' status. I tried some variations of the GStreamer pipeline used by Flumotion but couldn't get it to work. I found many similar unsolved questions on Stack Overflow, which made me think it's a compatibility issue. I'm not a gst-pipeliner, so please help me out of this struggle.
|
VLC RTSP compatibility with GStreamer
|
There's no particular difficulty in doing this and it's common practice.To generate instructions likexsl:value-ofwithout them being interpreted as instructions, usexsl:namespace-alias. There's an example in the XSLT specification itself.
|
I am trying to use the result of a Java XSL transform (from XMLSource1.xml and StyleSheet1.xsl) as a stylesheet for another transform (from XMLSource2) and then output the result. I read an interesting article about chaining transformations (also described here), but what I am trying to achieve is slightly different, because the result of the first transformation will not be the source for the second one but the stylesheet that should be applied to another transformation. How could I achieve this?
|
Use Result of java XSL transformation as a Stylesheet of a subsequent transformation
|
Total_execution_time = (1 + stall_cycles * stall_frequency) * base_execution_time
base_execution_time = 1 s [i.e. at 1 GHz, the 10^9 instructions at 1 cycle each take 1 second]
stall_frequency = 20% = 0.20
stall_cycles = 2 [i.e. the branch result is known in the 3rd stage of the pipeline, so there will be 2 stall cycles]
Therefore Total_execution_time = (1 + 2*0.20) * 1 = 1.4 seconds
|
A CPU has a five-stage pipeline and runs at 1 GHz frequency. Instruction fetch
happens in the first stage of the pipeline. A conditional branch instruction
computes the target address and evaluates the condition in the third stage of the
pipeline. The processor stops fetching new instructions following a conditional
branch until the branch outcome is known. A program executes 10^9 instructions
out of which 20% are conditional branches. If each instruction takes one cycle to
complete on average, the total execution time of the program is:
(A) 1.0 second
(B) 1.2 seconds
(C) 1.4 seconds
(D) 1.6 seconds
|
Total execution time of a program with conditional branches in a five-stage pipeline
|
Download Calabash (a FLOSS XProc processor) and run it on your pipeline.
|
I am very new to XSLT and XProc. Now I have written my sample XProc; like every beginner, I also started with Hello World.
hello.xpl
<?xml version="1.0" encoding="UTF-8"?>
<p:pipeline xmlns:c="http://www.w3.org/ns/xproc-step" xmlns:p="http://www.w3.org/ns/xproc" name="pipeline">
<p:identity>
<p:input port="source">
<p:inline>
<p>Hello world!</p>
</p:inline>
</p:input>
</p:identity>
</p:pipeline>
Now my question may be silly: I want to know how to execute this and view the output. Thanks.
|
Executing XPROC
|
To me it looks like a simple table with data. You can always dynamically render the images in columns 1 and 2 later with C# using System.Drawing. Using a charting control for drawing 2 simple images sounds like overkill, while with C# you can easily do what you need and present it to the client using web standards with no plugins. To overlay 2 images and write some text on them:
string graphPath = Server.MapPath("graph.png");
string iconPath = Server.MapPath("icon.png");
// Prepare the template image
System.Drawing.Image template = Bitmap.FromFile(graphPath);
Graphics gfx = Graphics.FromImage(template);
// Draw an icon on it on x=70, y=70
Bitmap icon = new Bitmap(iconPath);
gfx.DrawImage(icon, new Point(70, 70));
// Draw a string on it on x=150, y=150
Font font = new Font("Arial", 12.0f);
gfx.DrawString("11/14/2009", font, Brushes.Black, new Point(150, 150));
// Output the resulting image
Response.ContentType = "image/png";
template.Save(Response.OutputStream, ImageFormat.Png);
See, you end up coding very little, and you don't surrender yourself to playing by the rules dictated by a charting control.
|
I'm looking for a chart control that is able to display data like this: but it must not be Silverlight, Flash, or another technology forcing the user to install a plugin. It would be best if the control could use HTML5, JavaScript, or C#.
|
Pipeline chart control
|
As per MSDN documentation, correct way is to useInitialiseCulture- it gets called very early in page life-cycle before even controls are created. And that is even beforePreInitevent.Said that, people have set the culture information as late asPage_Loadevent. For example, seethis KB articleor thiscode project article. So I guess that PreInit event should be ok.There are two relevant properties - Culture and UICulture. AFAIK, UICulture is used for loading correct local (page specific)/global resources and that would be done at rendering stage - so should not be an issue. The culture info from thread get used by many frameworks methods and you need to be careful using any code that is dependent on the culture information before you sets the culture in page life cycle - example of such code can be formatting of data or parsing from request data etc.
|
What is the last point in the asp.net page loading pipeline that I can change the culture of a page by doing the following?Thread.CurrentThread.CurrentCulture = << new culture >>;
Thread.CurrentThread.CurrentUICulture = << new culture >>;
I am changing the culture in my code and want to know the last point at which I can change the page's culture so that the correct resource files etc. are picked up. Is the page's PreInit too late in the pipeline to change the culture? I am aware there is an InitialiseCulture method in the Page class, but I am working outside of this.
|
last point in page pipeline to change culture?
|
I think this is a good practice to cut down on people who just want to poke around on your site for security holes. I mean, if your student IDs are sequential numbers and people are able to see the ID for their page in the query string, it is pretty easy to just increase the number to see if you can access the next item on the list, or build a script to iterate through all of the numbers. Ideally you would want some security to prevent people from getting to those pages anyway if you don't want them to have access. But even if the information was all public, it could prevent people from writing a script to iterate through all of the information. We actually do this in our product because, even though we have security for data items, it is up to the administrators of the site to make sure that the security is applied properly, so we encrypt and decrypt keys that are exposed in the URL to make it a bit safer in case the administrators don't know what they are doing and leave things open that should be locked down. I like this extension method for easy encrypt/decrypt: http://www.extensionmethod.net/Details.aspx?ID=69
You will need to make sure to URL-encode the encrypted values because they are not always URL-friendly when they are generated. You can also expect some ugly URLs; for example, a 5-character key will encrypt to around 14 characters of random-looking stuff.
|
I just came across some code that seems to encrypt database keys prior to sending them to the client (web browser, Silverlight, etc.). To illustrate, suppose you have a list of students and extra-curricular activities, and a relationship defined between them. Every time the data is written out to the ASPX page, the studentID and activityID are encrypted. Every time a write or modify is made, this value is sent back to the server, decrypted, and saved to the database. What could be the reasons to expose data this way? Is this a normal practice? If this selective encryption is a good practice, what are the best ways of approaching it?
|
Encrypting foreign keys from database prior to sending to browser
|
import pandas as pd
# Create a DataFrame
df = pd.DataFrame('your data')
# Define a function to apply the conditions
def create_campaign(row):
    if row['sales'] == 500 and row['region'] == 'North':
        return 'Y'
    else:
        return 'N'
# Apply the function to create the 'campaign' column
df['campaign'] = df.apply(create_campaign, axis=1)
# Print the resulting DataFrame
print(df)
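A vectorized alternative (my addition; it assumes the same 'sales' and 'region' column names) avoids the row-wise apply:
import numpy as np
# 'Y' where both conditions hold, 'N' everywhere else
df['campaign'] = np.where((df['sales'] == 500) & (df['region'] == 'North'), 'Y', 'N')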
|
I appreciate help in my learning process with the Pipeline Builder, with screenshots or videos. I want to create a new column named "campaign": if sales is 500$ and region is North, then the campaign column result will be "Y", else "N".
|
How to create a new column by an if statement in a pipeline builder where I need to use two different columns?
|
I did it another way, for people that want to do it. It's a bit tricky, but I made a template that renders the CI variable and included it as an artifact for the child trigger pipeline. That job triggers the remote pipeline with the dynamic path.
|
I have a question about triggering a child pipeline. In my case I generate multiple projects, with each project having a different pipeline. In my main build pipeline I need to launch the child pipeline dynamically for the new project created by that same pipeline (so the project name is different each time). My purpose here is to fill in the project with a dynamic path. It works with a static one, but not with one containing variables, which stays stuck in pending status. For example, a static one:
deploy:
  stage: init_project
  trigger:
    project: my/project
    branch: master
    strategy: depend
A dynamic one:
deploy:
  stage: init_project
  trigger:
    project: $my/$project
    branch: master
    strategy: depend
Is there a solution, or is there something wrong that I did here?
|
Gitlab ci Multiple trigger pipeline dynamic project name
|
Thepipelineitself does not care about the format fory, it just hands it over to each step. In your case, it's theLogisticRegression, which indeed is not set up for multi-label classification. You can manage it using theMultiOutputClassificationwrapper:pipe_lr = make_pipeline(
StandardScaler(),
PCA(n_components=2),
MultiOutputClassifier(LogisticRegression(random_state=1, solver='lbfgs'))
)(There is also aMultiOutputRegressor, and more-complicated things likeClassifierChainandRegressorChain. See theUser Guide. However, there is not to my knowledge a builtin way to mix and match regression and classification tasks.)
|
my dataset consist of 10 feature (10 columns) for input and the last 3 columns for 3 different output. If I use one column for output, for example y = newDf.iloc[:, 10].values , it works; but if I use all 3 columns it gives me an error at pipe_lr.fit and says: y should be a 1d array, got an array of shape (852, 3) instead.
How can I pass y ?X = newDf.iloc[:, 0:10].values
y = newDf.iloc[:, 10:13].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
pipe_lr = make_pipeline(StandardScaler(),
PCA(n_components=2),
LogisticRegression(random_state=1, solver='lbfgs'))
pipe_lr.fit(X_train, y_train)
|
Can a pipeline works for more than one class?
|
Downgrading sklearn to 0.23.2 solved the problem for me.
|
I am trying to convert my sckit-learn model to PMML 4.3 using sklearn2pmml. I did it successfully for the version 4.4 however I need the PMML 4.3 and therefore I decided to install the version 0.56 for sklearn2pmml using the following commands:pip install sklearn2pmml=0.56However now when I try to create a pipeline I receive the following error:Pipe_PMML = PMMLPipeline([('scaler', StandardScaler()),('classifier', LogisticRegression())])AttributeError: 'PMMLPipeline' object has no attribute 'apply_transformer'The newest version of sklearn2pmml is 0.76 and solve this issue but generates the PMML version 4.4. I was wondering how the previous versions used to convert the ML models in Python to PMML. Is there any way to downgrade the 4.4 version to older version?
|
Downgrading PMML 4.4 to 4.3 version
|
-1The other answer for this question is achieved with below code:plan:
- get: test-version-resource
- task: send-version
config:
platform: linux
image_resource:
type: docker-image
source:
repository: gliderlabs/alpine
tag: latest
inputs:
- name: test-version-resource
run:
path: /bin/sh
args:
- -c
- |
ls -la test-version-resource
|
After issuing aflycommand to a Concourse pipeline, I would like to version this new pipeline.I have tried maintain a separate `version' file on GIT repo.But my requirement is to display this version on pipeline job name.Please see the image where a version has to be appendedAdding more details:I am looking at facility in GIT to watch for commits on a particular folder (sayxyz). I got something similar the below:Is there a tool to watch a remote Git repository on Ubuntu and do popup notifications when commits are made?https://github.com/jakeonrails/git-notify
|
How can I add versioning to concourse pipeline?
|
There's no way to distinguishecho -e 'first\nsecond'
echo -e 'first\nsecond'
echo -e 'first\nsecond'fromecho -e 'first\nsecond\nfirst'
echo -e 'second\nfirst\nsecond'Both output the following stream of bytes (represented as hex):66 69 72 73 74 0A 73 65 63 6F 6E 64 0A 66 69 72 73 74 0A 73 65 63 6F 6E 64 0A 66 69 72 73 74 0A 73 65 63 6F 6E 64 0AThe stream doesn't contain any information about what these bytes mean or how they were assembled. At best, there could be timing differences.
|
I am running a script input.sh which has multiple multi-line outputs like so:echo -e 'first \n second'
echo -e 'first \n second'
echo -e 'first \n second'I don't have control over this file, and all I can know is that it will have multiple multi-line outputs.I need to be able to conduct operations on each individual output from that file in real time as it outputs messages. Buffering is one issue, but not the one I'm asking about here.I'm simplifying a little, but my problem boils down to this: I want to insert a kangaroo at the end of each individual output. See my attempts below:./input.sh | sed 's/$/kangaroo/'This version above inserts a kangaroo after every newline, not each multiline output../input.sh | perl -0777 -pe 's/$/kangaroo/'This perl version only inserts a kangaroo after all outputs have finished (one kangaroo total, instead of one kangaroo per output.)I have tried other variants but it's always one or the other-- a kangaroo after every new line, or a single kangaroo after everything. I tried usingtrto replace new lines with form feeds, but that didn't make any difference.How can this be done?By the way, I've read throughthis questionand its answers carefully, but they are discussing operating on a file. I was unable to apply the principles described there to a pipeline and reading from stdin.
|
Is there any way to tell apart multiple multi-line outputs from a script?
|
You can write a simple factory function or class to build a pipeline function:def pipeline(*functions):
def _pipeline(arg):
result = arg
for func in functions:
result = func(result)
return result
return _pipelineTest:>>> rec = Record(" hell o")
>>> rec.apply_func(pipeline(func1, func2))
'[hello]'This is a more refined version written with reference tothisusingfunctools.reduce:from functools import reduce
def pipeline(*functions):
return lambda initial: reduce(lambda arg, func: func(arg), functions, initial)I didn't test it, but according to my intuition, each loop will call the function one more time at the python level, so the performance may not be as good as the loop implementation.
|
This question already has answers here:Better way to call a chain of functions in python?(4 answers)Closed3 months ago.I have several string processing functions like:def func1(s):
return re.sub(r'\s', "", s)
def func2(s):
return f"[{s}]"
...I want to combine them into one pipeline function:my_pipeline(), so that I can use it as an argument, for example:class Record:
def __init__(self, s):
self.name = s
def apply_func(self, func):
return func(self.name)
rec = Record(" hell o")
output = rec.apply_func(my_pipeline)
# output = "[hello]"The goal is to usemy_pipelineas an argument, otherwise I need to call these functions one by one.Thank you.
|
Python chain several functions into one [duplicate]
|
The-Idparameter accepts pipeline input by property name, so you'd have to add another property with the proper name containing the PID. While possible, I'd usually just use the direct route:Get-NetTCPConnection | ForEach-Object { Get-Process -Id $_.OwningProcess }
|
The specific use case is:Get-NetTCPConnection -State Listen -LocalPort 6005 |
Get-Process -PID ???Where???is theOwningProcessproperty of the output from the first cmdlet.
|
How do I pipe a property of a PowerShell cmdlet output into an input of a different cmdlet?
|
Site context is a great way to do this. You can use standard Sitecore configuration factory approaches to feed in the site names that should utilize the extension, then ignore others, including "shell."c.f.https://community.sitecore.net/technical_blogs/b/sitecorejohn_blog/posts/the-sitecore-asp-net-cms-configuration-factory
|
I have a pipeline processor which works as expected on the site, but causes havoc when you go in to the CMS.What is the proper way to determine if the request is to the CMS?
Preferably I'd like something a little more robust than checking to see if the URL contains "/sitecore/".
|
Sitecore pipeline processor messing up the CMS
|
You shouldn't let lazy loading occur in your view pages. That means the view accesses data which breaks the entire point of MVC.Instead you should get the entirety of the data in your controller and then pass that to your view.
|
I am porting an asp.net webforms application to mvc.net. I have an OR framework that requires a DataSession object to be created before any database operations can be performed.In my current webform application I instantiate the DataSession during the Page_Init event and during the Page_UnLoad event I clear the object.I am looking for something similar with mvc.net. I have initially started with using the OnACtionExecuting (raised before an action) and OnActionExecuted (raised after the action). However, during the rendering of the page there is some lazy loading of entities that fail as the DataSession is no longer available. What I need is something that will fire after the View has been rendered.
|
with mvc.net is there an event that is raised after the view is rendered
|
There are two types of Pipeline syntax.Declarative PipelineandScripted Pipeline. A declarative pipeline starts with apipeline {}wrapper and will haveStagesandSteps. Declarative pipeline limits what is available to the user with a more strict and pre-defined structure. Where in scripted Pipeline it's more closer to groovy, and users will have more flexibility on what they can do. When you run something in aScriptblock in a declarative Pipeline, The script step takes a block of theScripted Pipelineand executes that in the Declarative Pipeline. Basically, it runs aGroovyscript for you. So your question can be rephrased as whatdefmeans in a Groovy script.Simply in a Groovy script, if you omit adding thedefkeyword the variable will be added to the current script's binding. So it will be considered as a Global variable. If you usedefthe variable will be scoped, and you will only be able to use it in the current script block. There are multiple detailed answers for thishere, so I'm not going to repeat them.
|
I have two Jenkinsfile for sample:
The content of A_Jenkinsfile is:pipeline {
agent any
stages {
stage("first") {
steps {
script {
foo = "bar"
}
sh "echo ${foo}"
}
}
stage("two") {
steps {
sh "echo ${foo}"
}
}
}
}The other one is B_Jenkinsfile and its content is:pipeline {
agent any
stages {
stage("first") {
steps {
script {
def foo = "bar"
}
sh "echo ${foo}"
}
}
stage("two") {
steps {
sh "echo ${foo}"
}
}
}
}When I build them, B_Jenkinsfile is failed and A_Jenkinsfile is success.What is differences between either of using def and without using def in Jenkinsfile in script block?
|
What is differences between either of using def and without using def in Jenkinsfile in script block?
|
Latest Ubunt repos don't contain old Python versions by default.You can either try using a newer Python version or adding thedeadsnakesrepo with something like this:FROM ubuntu:latest
ENV http_proxy $HTTPS_PROXY
ENV https_proxy $HTTPS_PROXY
RUN apt-get install -y software-properties-common && sudo add-apt-repository ppa:deadsnakes/ppa && apt-get update && apt-get install -y \
python3.8 \
python3-pip \
&& rm -rf /var/lib/apt/lists/*You may also need toapt updatebefore installing thesoftware-properties-commonpackage.
|
I am trying to update my CI Pipeline for git lab, but my pipeline keeps on failing because the docker in docker of my runner fails to install python 3.8.In my Docker file I am running the following commandsFROM ubuntu:latest
ENV http_proxy $HTTPS_PROXY
ENV https_proxy $HTTPS_PROXY
RUN apt-get update && apt-get install -y \
python3.8 \
python3-pip \
&& rm -rf /var/lib/apt/lists/*but my pipeline fails giving me the following errorPackage python3.8 is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another sourceE: Package 'python3.8' has no installation candidateerror building image: error building stage: failed to execute command: waiting for process to exit: exit status 100In many suggestions I have found the using the apt-get update command should solve the problem however that is not working for me.
|
Issue with the installation of Python 3.8 with docker file
|
Hello and thank you for the question. According tothe Remove-AzDataFactoryV2Pipeline doc, the-Forceflag simply skips the confirmation prompt. It does not actually 'Force' the deletion in spite of errors.Since you are already doing automation, might I suggest leveraging the error message to recursively attempt to delete the referencing pipeline.$error[0]gets the most recent error.(Pseudocode)try_recurse_delete( pipeline_name )
do_delete(pipeline_name)
if not $error[0].contains("referenced by " + pipeline_name)
then return true
else
try_recurse_delete( get_refrencer_name($error[0]) )Given that pipeline dependencies can be a many-to-many relationship, subsequent pipelines in your for-each loop might already be deleted by the recursion. You will have to adapt your code to react to 'pipeline not found' type errors.
|
I'm actually some automation for my ADF. As a part of that, I'm trying to delete all the ADF V2 pipelines. The problem is my pipelines having many references with different pipelines itself.$ADFPipeline = Get-AzDataFactoryV2Pipeline -DataFactoryName $(datafactory-name) -ResourceGroupName $(rg)
$ADFPipeline | ForEach-Object { Remove-AzDataFactoryV2Pipeline -ResourceGroupName $(rg) -DataFactoryName $(datafactory-name) -Name $_.name -Force }And most of the time I get the error likeThe document cannot be deleted since it is referenced by "blabla"I understand the error that it saying some references and cannot be deleted. However, when I tried the same deletion in the azure portal, irrespective of the reference I can able to delete. So I want to find a way that whether it possible to tell that Powershell even though it's having a reference delete it forcefullyAny other inputs much appreciated!
|
How to delete the ADFPipeline which is having the references Forcefully
|
You can make a file of env var assignments and source that file as need, ie.$ cat myEnvFile
path1=/x/y/z
path2=/w/xy
path3=/r/s/t
otherOpt1="-x"Inside your script you can source with either. myEnvFileor the more versbose version of the same featuresourc myEnvFile(assuming bash shell) , i.e.$cat myScript
#!/bin/bash
. /path/to/myEnvFile
# main logic below
....
# references to defined var
if [[ -d $path2 ]] ; then
cd $path2
else
echo "no pa4h2=$path2 found, can't continue" 1>&1
exit 1
fiBased on how you've described your problem this should work well, and provide a-one-stop-shop for all of your variable settings.IHTH
|
I'm dealing with a pipeline of predominantly shell and Perl files, all of which pass parameters (paths) to the next. I decided it would be better to use a single file to store all the paths and just call that for every file. The issue is I am using awk to grab the files at the beginning of each file, and it's turning out to be a lot of repetition.My question is: I do not know if there is a way to store key-value pairs in a file so shell can natively do something with the key and return the value? It needs to access an external file, because the pipeline uses many scripts and a map in a specific file would result in parameters being passed everywhere. Is there some little quirk I do not know of that performs a map function on an external file?
|
Simple map for pipeline in shell script
|
Please seexargs. For example:myapp | grep '|' | sed -e 's/^[^|]*//' | sed -e 's/|.*//' | xargs -n 1 myscript.sh
|
I have an application (myapp) that gives me a multiline outputresult:abc|myparam1|def
ghi|myparam2|jkl
mno|myparam3|pqr
stu|myparam4|vwxWith grep and sed I can get my parameters as belowmyapp | grep '|' | sed -e 's/^[^|]*//' | sed -e 's/|.*//'But then want these myparamx values as paramaters of a script to be executed for each parameter.myscript.sh myparam1
myscript.sh myparam2
etc.Any help greatly appreciated
|
Use each line of piped output as parameter for script
|
The following should work:int(''.join([str(8-int(y)) for y in str(my_number)]))Example:my_number=1258437620Output:>>> int(''.join([str(8-int(y)) for y in str(my_number)]))
7630451268
|
I am trying to create pipeline that converts a number to its Ones' complement number, for example (8)1234 -> 765483743 -> 05145I tried to create something in this style but I can't figure out how to build the pipeline correctly.int(''.join((my_number(lambda y: 8-y, list(my_number)))))ErrorTypeError: 'int' object is not callable
|
Python: ones complement using join and lambda
|
You can try this approachdata2 <- data %>%
mutate(ID = ifelse(row_number()<= 95, paste0("0", ID), ID))
head(data2)
# ID not.imp1 not.imp2 not.imp3
# 1 09449 -1.4297317 -2.2210106 0.1923912
# 2 07423 1.9010681 1.0825734 -0.8855694
# 3 06283 0.2508254 -0.5307967 2.1645044
# 4 05593 -2.2451267 0.1281156 -1.8528800
# 5 09194 -0.1677409 -0.7422480 -0.4237452
# 6 07270 -0.2536918 1.2289698 1.0083092
tail(data2)
# ID not.imp1 not.imp2 not.imp3
# 95 06538 1.0071791 0.1596557 -0.7099883
# 96 4829 0.2444440 0.8869954 -1.2938356
# 97 2571 -1.1012023 0.8343393 -0.6264487
# 98 150 0.2116460 -0.2146265 -1.8281045
# 99 3107 -1.2379193 0.3491078 1.4531531
# 100 9953 -0.9326725 1.1146032 -1.5542687
|
Considerdatacreated here:data <- data.frame(ID = sample(10000,100), not.imp1 = rnorm(100), not.imp2 = rnorm(100), not.imp3 = rnorm(100))
#Note that not all IDs are the same lengthWe have data for 100IDs, where each individual has a uniqueIDnumber. Columnsnot.imp1:3are only relevant to show the structure of the dataframe.
We want to add a leading zero to the first 95IDnumbers. I am trying to do this usingdplyrpipes, but cant figure out how to add the zeros.
Here is how I subset the data that I want to add the zeros to:library(dplyr)
data%>%
select(ID)%>%
slice(1:95)I have tried several things like adding%>%mutate(paste0("0",.))to the pipe, but havent gotten anything to work. what is the best way to do this?
|
How to add leading zeros to select rows in a data frame column using dplyr pipes
|
for alift and shift option, you can run your python workload on theGoogle Compute Engine, which is a virtual machine, but for best use of Google Cloud, i suggest you to:Spin up aGoogle Compute EngineRun your Python WorkloadSave your data onGoogle Big QueryShutdown your VMSchedule it using theCloud SchedulerHere is a tutorial from Google on how to do it:https://cloud.google.com/scheduler/docs/start-and-stop-compute-engine-instances-on-a-schedule
|
We have a Python data pipeline that run in our server. It grab data from various sources, aggregate and write data to sqlite databases. The daily runtime is just 1 hours and network maybe 100mb at most. What are our options to migrate this to Google Cloud? We would like to have more reliable scheduling, cloud database and better data analytics options from the data (powerful dashboard and visualization) and easy development. Should we go with serverless or server? Is the pricing free for such low usage?
|
How to migrate local data workflow to Google Cloud?
|
For anyone looking for a way to filter logs, if there are multiple services running, you can create new azure build pipeline task (Docker) that runs docker command:docker logs -f NAME_OF_THE_SERVICEThis way you will only see logs from desired service.
|
I am using Azure DevOps build pipeline, to run my selenium web automation tests (run by maven, inside docker container)Now, even If one of my test scenarios fail, pipeline job is still considered as successful, how can I specify to look for certain log to fail it?The 'pipeline job/task/phase' I am using to run my test is 'docker compose'My second question is, is there possibility to filter pipeline output logs? Currently it is flooded with output from few services run in container:The only thing I found is, possibility to search through logs, but no filtering, regards.
|
Azure DevOps pipeline - how to catch error
|
When the code was compiled, the compiler threw a detailed warning explaining what’s wrong with this code:warning:parentheses are required when piping into a function call. For example:foo 1 |> bar 2 |> baz 3is ambiguous and should be written asfoo(1) |> bar(2) |> baz(3)Ambiguous pipe found at:/path/to/file.ex:lineThat said, with pipe operator, it is mandatory to use parentheses around arguments to function calls:directory_contents
|> Enum.map(fn(item) -> Path.join(directory_path, item) end)
|> Enum.filter(&File.dir?/1)Otherwise, the compiler uses wrong operator precedence.
|
I'm trying to write a tool that will take a list of files/folders fromFile.ls, add the full path to them, and filter them based on if they are directories or not.This works:directory_contents = File.ls!(directory_path)
contents_with_full_paths = Enum.map directory_contents, fn(item) -> Path.join(directory_path, item) end
only_dirs = Enum.filter contents_with_full_paths, fn(item) -> File.dir?(item) endThis does not:directory_contents = File.ls!(directory_path)
directory_contents
|> Enum.map fn(item) -> Path.join(directory_path, item) end
|> Enum.filter fn(item) -> File.dir?(item) endIt throws** (Protocol.UndefinedError) protocol Enumerable not implemented for #Function<1.10194857/1 in RepoFinder.subdirectories_in/1>, only anonymous functions of arity 2 are enumerable. This protocol is implemented for: Date.Range, File.Stream, Function, GenEvent.Stream, HashDict, HashSet, IO.Stream, List, Map, MapSet, Range, Stream
(elixir) lib/enum.ex:3213: Enumerable.Function.reduce/3
(elixir) lib/enum.ex:1847: Enum.filter/2Why is that? Shouldn't those two implementations work essentially the same way?
|
Elixir Enum.map piped to Enum.filter not working
|
So you don't even have to write a function, there's a bunch of ways to do that, for example:> aggregate(cut~gem, data=df, mean, na.rm=T)
gem cut
1 Opal 2.500000
2 Ruby 4.333333
3 Topaz 4.000000Or> tapply(df$cut, df$gem, mean, na.rm=T)
Opal Ruby Topaz
2.500000 4.333333 4.000000If you really want to write a function that only gives out one value, then abasepackage one is:> abc<- function(df, column1, val, column2){
+ mean(df[which(df[,column1] == val), column2], na.rm=T)
+ }
> abc(df, "gem", "Ruby", "cut")
[1] 4.333333
|
Hoping to get some help on this
I have a data frame :df<- data.frame(gem = c(Ruby, Opal, Topaz, Ruby, Ruby,Opal),
cut = c(2,3,4,5,6,2))Now the function I am aiming to make is to take subset first i.e. where gem is Ruby and then take mean of cut from that subset.I have tried using following:abc <- function(x,column1,val,coulmn2){
x%>%
subset(column1 %in% val)%>%
mean(na.omit(column2))}
abc(df,gem,"Ruby",cut)This is not working but in above example ideally the answer should be 4.3
|
creating function to subset the data frame and then take mean of particular column in r
|
Bypassing means the data at that stage is passed to the stage required. For example in the first case (MX bypass),
the output of the operationADD r2, r3is available at theMstage, but has not written back to its destinationr1. TheSUBinstruction is expecting one of its data to be available atr1. Since thisr1data is produced by theADDand "we" know that it is this samer1is needed forSUBwe dont need to wait until the writeback stageWofADDis complete. "We" can simply bypass the data to theSUBinstruction. The same goes with WX bypass as well.
|
I am trying to understand the concept of bypassing by reading the following slideBypassing is reading a value from an intermediate source. What does the arrow stand for?, does it mean that X is executed after M in the sequence?. How does it work?
|
Pipelining with bypassing
|
The simple answer, is that they arenotdealt with in the OpenGL pipeline, but must be converted to something that the GL pipeline can process. The general approach would probably be to first convert to a primitive a little more real-time friendly, such as bezier patches, and then tesselate these at runtime into triangles.Tessellation could be regular, mapping a grid onto the patch, or could be based on curvature, subdividing the patch more where there is higher variance. Either way the surface is only truly evaluated at some vertices, and rendered as flat polygons (though shaders can be used to create appropriately smoothly varying normals, etc.)glMap()et-al (which were previously used to help render bezier patches, etc.) are deprecated and no longer present in the modern OpenGL API. Nowadays you would use shaders to deal with tessellation.
|
I'm curious about how NURBS are rendered in GPU's / the OpenGL graphics pipeline. I understand there are various calls within OpenGL and GLUT for easily rendering NURBS objects from a coding perspective using glMap and glMapGrid, but what I don't get is the process OpenGL goes through to do this. The idea behind NURBS is using curves to define surfaces, whereas the graphics pipeline appears to be build around triangle rasterization and triangle meshes, whereas NURBS are based around Bezier Curves, which are curved.So how are NURBS actually rendered, from a (high-level) pipeline perspective?
|
NURBS in the OpenGL Graphics Pipeline
|
Its pretty simple.
Clipping is process that says if primitive (point, line or triangle) is visible. (and is done after modelview* projection matrix transformation) if triangle is partially visible, triangle is split into more triangles that fit in frustum.After clipping is done, we need to normalizevertex(x,y,z,w) coordinates in order to project them to screen (window coordinates). This is called perspective division: new coordinates isx,y,z,1 = x/w, y/w, z/w, 1. Windows coordinates are dependant on viewport settings, and transformation is very simple.window_x = viewport_x + vertex_x * half_viewport_width + half_viewport_width;
window_y = viewport_y + vertex_y * half_viewport_height + half_viewport_height;
|
How does clipping and projecting work in a simplified explanation? It has something to do with normalizing the vertices and matrix multiplication that involves dividing x,y,z by a 4th variable. I am having trouble understanding what actually happens.
|
Clipping Space In OpenGL Pipeline
|
The job is trying to use the imagegcr.io/kaniko-project/executor-- but this image defines anENTRYPOINTthat points to the/kaniko/executorbinary and is therefore not compatible to be used as a job image in GitLab CI. Seethis answer for a full explanationof why this is problematic. Basically gitlab is trying to send commands to the job container, but is unexpectedly calling theexecutorbinary, which results in the error message which comes from/kaniko/executorError: unknown command "sh" for "executor"
Run 'executor --help' for usageBecause there's nothing in your.gitlab-ci.ymlconfiguration that would cause this image to be used, this is likely caused by the configuration of the runner you are using which specifies this as the default image when noimage:key is present.You can specify a different image to get around this problem:image: alpine # as a global key default
build-job:
stage: build
image: alpine # or on a per-job basisIf you really want to use this particular image (you almost certainly do not), you'll need to override the entrypoint:build-job:
stage: build
image:
name: gcr.io/kaniko-project/executor
entrypoint: [""]If you manage the GitLab runner configuration, you should change the defaultimageconfiguration to be a usable image.
|
Right now I am trying to learn CI/CD on GitLab. Unfortunately, I cant make a simple pipeline run on my project. The runner doesn't seem to work somehow.
Here is an image of the project:I wrote this in my .gitlab-ci.yml file:build-job:
stage: build
script:
- echo "Hello, $GITLAB_USER_LOGIN!"
test-job1:
stage: test
script:
- echo "This job tests something"
test-job2:
stage: test
script:
- echo "This job tests something, but takes more time than test-job1."
- echo "After the echo commands complete, it runs the sleep command for 20 seconds"
- echo "which simulates a test that runs 20 seconds longer than test-job1"
- sleep 20
deploy-prod:
stage: deploy
script:
- echo "This job deploys something from the $CI_COMMIT_BRANCH branch."
environment: productionwhich is just the original code from the GitLab Tutorial for creating a simple pipeline.Unfortunately i get this error afterwards:Using docker image sha256:7053f62a27a84985c6ac886fcb5f9fa74090edb46536486f69364e3360f7c9ad for gcr.io/kaniko-project/executor:debug with digest gcr.io/kaniko-project/executor@sha256:fcccd2ab9f3892e33fc7f2e950c8e4fc665e7a4c66f6a9d70b300d7a2103592f ...
Error: unknown command "sh" for "executor"
Run 'executor --help' for usageI'm using the shared runners of GitLab for the pipelines.
Any help would be very appreciated!
|
Any pipeline resolves in Error: unknown command "sh" for "executor" Run 'executor --help' for usage
|
Thewriteoperation will block when the pipe buffer is full, thereadoperation will block when the buffer is empty.When the write end of the pipe is closed, the reading process will get an EOF indication after reading all data from the buffer. Many programs will terminate in this case.When the read end of the pipe is closed, the writing process will get a SIGPIPE. This will also terminate most programs.When you runcat | ls, STDOUT ofcatis connected to STDIN ofls, butlsdoes not read from STDIN. On the system where I checked this,lssimply ignores STDIN and the file descriptor will be closed whenlsterminates.You will see the output ofls, andcatwill be waiting for input.catwill not write anything to STDOUT before it has read enough data from STDIN, so it will not notice that the other end of the pipe has been closed.catwill terminate when it detects EOF on STDIN which can be done by pressing CTRL+D or by redirecting STDIN from/dev/null, or when it gets SIGPIPE after trying to write to the pipe which will happen when you (type something and) press ENTER.You can see the behavior withstrace.catterminates after EOF on input which is shown asread(0, ...)returning 0.strace cat < /dev/null | lscatkilled by SIGPIPE.strace cat < /dev/zero | ls
|
My problem is a bit hard to explain properly as I do not understand fully the behavior behind it.
I have been working on pipe and pipelines in C, and I noticed some behavior that is a bit mysterious to me.Let's take a few example: Let's try to pipe yes with head. (yes | head). Even though I coded the behavior in a custom program, I don't understand how the pipe knows when to stop piping ? It seems two underlying phenomenons are causing this (maybe), the SIGPIPE and/or the internal size a pipe can take. How does the pipe stop piping, is it when it's full ? But the size of a pipe is way superior to 10 "yes\n" no ? And SIGPIPE only works when the end read/write is closed no ?Also let's take another example, for example cat and ls: cat | ls or even cat | cat | ls.
It seems the stdin of the pipe is waiting for input, but how does it know when to stop, i.e. after one input ? What are the mechanism that permits this behavior?Also can anyone provide me with others examples of these very specific behavior if there are any in pipes and pipelines so I can get an good overview of theses mechanism ?In my own implementation, I managed to replicate that behavior using waitpid. However how does the child process itself know when to stop ? Is it command specific ?
|
How can I understand the behavior of pipe, with varying data flow?
|
You'll have to store the output ofdumptemporarily; example with a variable:if out=$(dump "$@" "$db")
then
printf '%s\n' "$out" |
compress |
store "single" "$(backupName "$db")"
else
echo failed 1>&2
exit 1
fior test for the emptiness of stdout (is there's no output when it fails):dump "$@" "$db" | {
IFS='' read -r line
[ -n "${line:+_}" ] || { echo failed 1>&2; exit 1; }
{ printf '%s\n' "$line"; cat; } |
compress |
store "single" "$(backupName "$db")"
}
|
I've been trying different things without successso here is what I want to achieveset -o pipefail
dump "$@" "$db" | compress | store "single" "$(backupName "$db")"
# I would want something that behaves a bit like this
# meaning if it dump fails, don't store
dump "$@" "$db" && {
#migicGetStdout | compress | store "single" "$(backupName "$db")"
} || {
echo failed
}But it creates a empty file on failed dumpI'm lost with pipelineI've tried things likeset -e
set -o pipefail
dump "${dumpCommonArgs[@]}" "${dumpDbArgs[@]}" "$@" "$db" > >(compress | store "single" "$(backupName "$db")")
# or
( compress | store "single" "$(backupName "$db")" ) < <(dump "$@" "$db") || return 2
# or
## this way compress get the global $@ ... I don't understand that either
store "single" "$(backupName "$db")" < <(dump "${dumpCommonArgs[@]}" "${dumpDbArgs[@]}" "$@" "$db") > >(compress)
# there would be an easy one
dataToStore=$(dump "$@" "$db")
rc=$?
# but this means dump is stored in memory before saving... not the best deal as mysql already needs a lot of ram to run a dumpstorefunction is still called!So seems I'm missing something.Thanks for helping me out
|
bash intercepting error and stop in pipeline
|
Apache Beam pipeline execution is deferred--a DAG of operations to execute is built up and nothing actually happens until yourunyour pipeline. (In Beam Python, this is typically implicitly invoked at the end of awith beam.Pipeline(...)block.). PCollections don't actually contain data, just instructions for how the data should be computed.In particular, this means that when you writetagged = lines | beam.ParDo(filters()).with_outputs(...)tagged doesn't actually contain any data, rather it contains references to the PCollections that will be produced (and further processing steps can be added to them). The data inlineshasnot actually been computed or read yetso you can't (during pipeline construction) figure out what the set of outputs is.It's not clear what your end goal is from the question, but if you're trying to partition outputs, you may want to look intodynamic destinations.
|
I have this code which tags outputs based on some data of the input file:class filters(beam.DoFn):
def process(self, element):
data = json.loads(element)
yield TaggedOutput(data['EventName'],element)I need help with the next step of writting the resulting tagged outputs:tagged = lines | beam.ParDo(filters()).with_outputs('How can I dinamiclly acces this tags?')So as you can see when I do '.with_outputs()' I dont know how many and what names are the taggs going to be so I can´t predict things like:tag1 = tagged.tag1Thank you for your helpUPDATE: this wont work cause with.outputs() is emptytagged_data= lines | 'tagged data by key' >>
beam.ParDo(filters()).with_outputs()
for tag in tagged_data:
print('something')
output: WARNING:apache_beam.options.pipeline_options:Discarding unparseable argsbut this will worktagged_data= lines | 'tagged data by key' >>
beam.ParDo(filters()).with_outputs('tag1','tag2')
for tag in tagged_data:
print('something')
output:
something
something
|
How to write each tagged output to different file in Apache beam
|
After research hundred pages but cannot find anything, then I must try one by one by myself, finally I found the pattern for the artifacts of all the files:artifacts:
- '**'
|
Basically, I don't want pipeline step clone code on next steps, only first step will clone the source code a time. Another reason is if step clone the source code (and doesn't use the source code from previous) the built code will be lost.I known that the bitbucket pipeline has the artifacts feature but seems it only store some parts of the source code.The flow is:Step 1: Clone source code.
Step 2: Run in parallel two steps, one install node modules at root folder, one install node module and build js, css at app folder.
Step 3: Will deploy the built source code from step 2.Here is my bitbucket-pipelines.ymlimage: node:11.15.0
pipelines:
default:
- step:
name: Build and Test
script:
- echo "Cloning..."
artifacts:
- ./**
- parallel:
- step:
name: Install build
clone:
enabled: false
caches:
- build
script:
- npm install
- step:
name: Install app
clone:
enabled: false
caches:
- app
script:
- cd app
- npm install
- npm run lint
- npm run build
- step:
name: Deploy
clone:
enabled: false
caches:
- build
script:
- node ./bin/deploy
definitions:
caches:
app: ./app/node_modules
build: ./node_modules
|
Bitbucket pipeline reuse source code from previous step
|
There is a stage 0 proposal for EcmaScript for this pipeline operator:https://github.com/tc39/proposal-pipeline-operatorTypescript usually adopts EcmaScript features when they reach stage 3
|
Many programming languages have some means of chaining functions left-to-right, instead of inside-to-outside.For example, in Bash:produce some data | transform1 arg1 | transform2 arg2 | transform3 arg3in F#:produce some data |> transform1 arg1 |> transform2 arg2 |> transform3 arg3in Kotlin:produce(some, data)
.let { transform1(it, arg1) }
.let { transform2(it, arg2) }
.let { transform3(it, arg3) }and so on.Does Typescript allow doing anything like this? Does it have (or allow writing) a "forward pipe" operator like|>? Does it have (or allow adding to all objects at once) a.let {...}extension function?
|
Is there a way to chain functions left-to-right in Typescript?
|
I have a list ofPSCustomObjects…I want to check for the uniqueness of only one property, filter out all non-unique ones and
not just leave one of them, and allow the other properties to pass
through unchanged…Let's rewrite above requirements in terms of PowerShell cmdlets:$A = @'
A,B
1,2
2,2
4,1
5,4
'@ | ConvertFrom-Csv -Delimiter ',' # a list of PSCustomObjects
$A |
Group-Object -Property B | # check for the uniqueness,
Where-Object Count -eq 1 | # filter out all non-unique ones,
Select-Object -ExpandProperty Group # and pass through the rest unchangedOutput:62219608.ps1A B
- -
4 1
5 4
|
UsingSelect-Object, you can select the first of multiple occurences of an object using-Unique. You can also select only some properties to be retained. But what if I want to check for the uniqueness of only one property, filter out all non-unique ones and not just leave one of them, and allow the other properties to pass through unchanged? Specifically, I have a list ofPSCustomObjects where one property is a base name, and another is a shortened name generated automatically. I want to filter out elements where the short name occurs multiple times, but the base names of all items are all different. It seemsSelect-Objectisn't a match for this task, but what is?EDIT: To clarify - this should be the result:> $A = [PSCustomObject]@{A = 1; B = 2}, [PSCustomObject]@{A = 2; B = 2}, [PSCustomObject]@{A = 4; B = 1}
> $A
A B
- -
1 2
2 2
4 1
> $A | Mystery-Cmdlet "B"
A B
- -
4 1
|
Select multiple properties where one of the properties must be unique in a pipeline
|
I believe you are missing the underscore for$_variable:"ivan" | ForEach-Object -Process { Get-ADComputer -Filter * -properties description | Where-Object -Property description -eq "something-$_"}this one is working ...
|
I am trying to send a user name (SamAccountName) down the PowerShell Pipeline to find a computer based on the Description property in Active Directory:The Description property is always "something-UserName"I know I don't need to send the variable down the pipeline and can simply express it in the filter but I have s specific use case where I need to do this.This is what I have tried:"bloggsJ" | %{Get-ADComputer -server domain.com -Filter * -Properties Description | ?{$_.Description -eq "something-$_"}} | select NameThis produces nothing even though there is a computer with a description property of "Something-bloggsJ" on that domain.Any advice please.
|
PowerShell - Use a string in a foreach-object against the Description AD property of a computer object
|
man xargs:-r, --no-run-if-empty
If the standard input does not contain any nonblanks,
do not run the command. Normally, the command is run
once even if there is no input. This option is a GNU
extension.
|
ps aux | grep node | grep -v grep | awk '{print $2}' | xargs kill -9I use the above command to kill all Node.js processes (Ubuntu) abut if there is no node process running it will show an error (stderr). Is it possible to use if statement in a pipeline to avoid havingxargsto receive nothing?Something like:ps aux | grep node | grep -v grep | awk '{print $2}' | if [ $pip ] ; then
xargs kill -9 $pip
fi
|
Is it possible to put if statement in a pipeline?
|
Name is a property. There's so many ways:get-childitem c:\users\*test*
get-childitem c:\users *test* # filter
get-childitem c:\users | where name -like *test*
get-childitem c:\users | where name -match test
(get-childitem c:\users).name -like '*test*' # doesn't stream as well
|
I just wanted to filter the names of the information from Get-ChildItem`.$folders = Get-ChildItem C:\Users
$folders.Name -like '*test*'What's the best way to do that instead of writing these two lines?What is the term for the.Namecalled?
|
What's the best way to pipeline this and filter?
|
The following code snippet outputs fully qualified names of all.pdffiles in subfoldersone levelunder current folder that do not have an underscore in theirfull path. (Output file namescouldcontain an underscore).Get-ChildItem -Path . |
Where-Object { $_.PsIsContainer -and $_.Fullname -notmatch '_' } |
Get-ChildItem -Filter *.pdf |
ForEach-Object {
<# do anything with every file object instead of just outputting its FullName <##>
$_.FullName
}You need to use the-Filterkeyword in the secondgciwith respect to its allowed position2(note that position1is dedicated for-Pathparameter).For further explanation and for potential improvements/optimalisations, readGet-ChildItemas well asGet-ChildItemfor FileSystem.
|
I use the following pipeline to select specific folders:gci -path * | ? { $_.PsIsContainer -and $_.Fullname -notmatch '_' }The above gives me folders that do not have an underscore in their name, which is what I want, and all is good so far.But when I pipe the result to anotherGet-ChildItem, I get aBindingExceptionerror:gci -path * | ? { $_.PsIsContainer -and $_.Fullname -notmatch '_' } | gci *.pdfgci : The input object cannot be bound to any parameters for the command either because the command does not take
pipeline input or the input and its properties do not match any of the parameters that take pipeline input.+ CategoryInfo : InvalidArgument: (Book Folder B:PSObject) [Get-ChildItem], ParameterBindingException+ FullyQualifiedErrorId : InputObjectNotBound,Microsoft.PowerShell.Commands.GetChildItemCommandHow can I process the files within each folder output from the above pipeline. For example, if a file has apdfextension, I would like to invoke theMove-Itemcommand for it.
|
PowerShell - How to process files in folders selected by regex pipeline
|
This would correspond to:spawn(pipeline(`home/bin/julia /home/elite_script.jl`, stdout="/home/beckman/elite_log.log", append=true))
|
How would I translate the following into Julia's run/readstring/pipeline() framework?home/bin/julia /home/elite_script.jl &>> /home/beckman/elite_log.log &
|
Julia Running External Programs
|
Your pipe is the wrong way around, to passnto the script you need to writeecho "n" | ./script argument1 argument2Another example:echo "abcd" | { read -p "Change Parameters?" b; echo $b; }Output:abcdIn the second example the{ ... }part is your script,echo "abcd"is piped to the script,readgets the "abcd" (the promptChange parameters?is not shown), saves it in the variable$b, then$bis echoed.
|
I want to pass an argument to a read command within this script via a pipe or some other method.My command within the script isread -p "Change Parameters?
while [ $REPLY != "n" ]
do
if [ $REPLY == "a" ]
then
..
..
if [ $REPLY =="j" ]
..
doneThe read command appears on line 48 of my script if thats of any useI've tried./script argument1 argument2 | bash /dev/stdin "n"which results inChange Parameters? /dev/stdin: line 1: Default: command not found
/dev/stdin: line 2: a: command not found
/dev/stdin: line 3: b: command not found
/dev/stdin: line 4: c: command not found
/dev/stdin: line 5: d: command not found
/dev/stdin: line 6: e: command not found
/dev/stdin: line 7: f: command not found
/dev/stdin: line 8: g: command not found
/dev/stdin: line 9: h: command not found
/dev/stdin: line 10: i: command not found
/dev/stdin: line 11: j: command not found
/dev/stdin: line 12: n: command not foundAnd then stopsI just want to pass the letter n to this command and for the script to continue till the end.
|
How do I pass arguments to a bash script using a pipe?
|
As @Luaan said in the comments,typewill only accept the filename as argument and not via its input channel. So piping won't do the trick in your case. You'll have to use another way to give the result of thewherecommand as an argument. Fortunately thefor /fcan help you process outputs of other commands. To print the file corresponding to the output of thewherecommand you'll have to use this on the command line:FOR /F "delims=" %G IN ('where myscript') DO type "%G"In a batch-file you'll have to use@echo off
FOR /F "delims=" %%G IN ('where myscript') DO type "%%G"
|
I'm trying to print the contents of a batch file that exists in my path.I can find the file with 'where':> where myscript
C:\scripts\myscript.batI can display the contents of the file with 'type':> type C:\scripts\myscript.bat
echo This is my script. There are many like it, but this one is mine.However, when I want to be lazy and use a single command:> where myscript | type
The syntax of the command is incorrect.Based on some tests I did, it seems 'where' output can't be piped out and 'type' input can't be piped in.Can anyone explain why this doesn't work in this way?P.S. I was able to do this in Powershell:Get-Command myscript | Get-Content.
|
Why can't I pipe 'where' output to 'type' in batch?
|
Why do you check if the stream is ready? All you want to do is wait for data to be available and read it when it is available; if you just use an ordinary potentially-blocking read, that is what will happen.If you exit the read loop as soon as no data is immediately available, which is what your code excerpt seems to be doing, then it is likely that your program will terminate early, at least some of the time. If you just read (possibly blocking until data is available) until you see an EOF, then things should Just Work. (That is what grep and other "professionally written programs" do.)
|
I have usedduffto create a report of duplicate files in my file system:duff ~/Photos > ~/Photos/duplicates.txtI have written some groovy scripts to transform this report into an HTML page where I can view the duplicate photos in my browser:cat duplicates.txt | filter_non_existent.groovy | duff_to_json.groovy | json_to_html.groovy > duplicates.htmlAbout 50% of the time, it works beautifully without any problems.Other times some of my scripts don't start executing, or run very slowly. Why is this, and what can I do to prevent it? (and why don't "professionally written" command line programs like grep suffer from this problem?)More informationI delay the start of each script until it receives std input (while
no data, sleep for 2.5 seconds). The starvation was more frequent
when the sleep interval was shorterThe more pipes I connect together, the more frequently it happens (if I just execute one of my scripts aftercat, I never get the problem)Here's the template each of my scripts are using (I use the groovy runtime but stick to plain java syntax):public static void main(String[] args) throws IOException {BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
while (!br.ready()) {
Thread.sleep(2500L);
}
while (br.ready()) {
String inputLine = br.readLine();
// Do stuff
}}
|
Bash Pipelined Shell scripts I wrote are Starved/Deadlocked/Performing poorly
|
You can use ateeand then write it to a file which you use later:tempfile="xyz"
tr ' ' '\n' < "${!#}" | grep '.' | tee > "$tempfile" | sort | uniq -c ...
nol=$(wc -l "$tempfile")Or you can use it the other way around:nol=$(tr ' ' '\n' < "${!#}" | grep '.' \
| tee >(sort | uniq -c ... > /dev/tty) | wc -l
|
I have a big txt file which I want to edit in pipeline. But on same place in pipeline I want to set number of lines in variable $nol. I just want to see sintax how could I set variable in pipeline like:cat ${!#} | tr ' ' '\n'| grep . ; $nol=wc -l | sort | uniq -c ...That after second pipe is very wrong, but how can I do it in bash?One of solutions is:nol=$(cat ${!#} | tr ' ' '\n'| grep . | wc -l)
pipeline all from the start againbut I don't want to do script the same thing twice, bec I have more pipes then here.I musn't use awk or sed...
|
Is it possible to set variable in pipeline?
|
Escape the expression.$ ./test | grep "first\|second"
first hello world.
secondAlso bear in mind that the shebang is#!/usr/bin/env python, not just#/usr/bin/env python.
|
I want to extract certain information from the output of a program. But my method does not work. I write a rather simple script.#!/usr/bin/env python
print "first hello world."
print "second"After making the script executable, I type./test | grep "first|second". I expect it to show the two sentences. But it does not show anything. Why?
|
grep in pipeline: why it does not work
|
The Powershell pipeline automatically "unrolls" arrays and collections one level, and passes them on to the next cmdlet or function one at a time, so when you send that array through the pipeline your function is dealing with one process object at a time.When you use the parameter, you're sending the entire array at once, and your function is dealing with an array object, not a process object, so you get different results.Try this and see if it doesn't make a difference in your output:Process {
$InputObject | foreach {Write-Host $_ }
}
|
i am getting confused by the way powershell deals with function parameters.so i have this sample ps module i made just for testing purposes, based upon my real script:function test {
[CmdletBinding(SupportsShouldProcess=$true, ConfirmImpact='Medium')]
param(
[Parameter(Mandatory=$true, Position=0, ValueFromPipeline=$true, ValueFromPipelineByPropertyName=$true)] [System.Management.Automation.PSObject]$InputObject
)
Begin {
}
Process {
Write-Host $InputObject
}
End {
}
}
Export-Modulemember -function testsave it as test.psm1 & import, and do 2 tests:First Test:(get-process | select -first 5) | testwill return with this:System.Diagnostics.Process (7zFM)
System.Diagnostics.Process (7zFM)
System.Diagnostics.Process (AcroRd32)
System.Diagnostics.Process (AcroRd32)
System.Diagnostics.Process (AESTSr64)Second Test:test -InputObject (get-process | select -first 5)will instead return with this:System.Diagnostics.Process (7zFM) System.Diagnostics.Process (7zFM) System.Diagn
ostics.Process (AcroRd32) System.Diagnostics.Process (AcroRd32) System.Diagnosti
cs.Process (AESTSr64)same happens when i use a variable to store and forward the data.there seem to be a difference in the way powershell treats parameters, the data i get back when using the -InputObject parameter seem to loose its array-ish format in some way...why does it do that? and is there a way to change this behaviour?
|
Powershell function: weird difference between pipeline and common parameter value
|
Take a look atCamelProxy. It allows you to send to a Camel endpoint.OrderService service = new ProxyBuilder(context)
.endpoint("direct:order")
.build(OrderService.class);OrderService is an interface which defines the methods you want to use to send:public interface OrderService {
public String send(SomeBean message);
}Sample route:from("direct:order").to("bean:someProcessor");Send a message to the route:String reply = service.send(new SomeBean());Here is a simple working example
|
I've just discovered Camel and it seems to be exactly what I need.I have several building blocks which know how to process specific input data, so I want to create a very simple GUI for users to select building blocks, just chaining them one after another (sort of like Fuse IDE, but not that fancy. Just a linear list of components is enough for me).I can't findexamples how to start a context feed simple POJOs in it one by one, waiting every time until the previous input message reaches the end of it's route, and then get another POJO on the opposite end.Or write it to a database / serialize to a file.Googling only brings up some examples with Spring, ActiveMQ, etc. I just need the simplest scenario but figure out which URIs to use etc.PS: I'm able to run this simple example (the only dependencies are camel-core 2.13 and slf4j 1.7.7)CamelContext context = new DefaultCamelContext();
// add our route to the CamelContext
context.addRoutes(new RouteBuilder() {
@Override
public void configure() {
from("file:src/data?noop=true").
choice().
when(xpath("/person/city = 'London'")).to("file:target/messages/uk").
otherwise().to("file:target/messages/others");
}
});
// start the route and let it do its work
System.out.println("Starting camel no maven");
context.start();
Thread.sleep(3000);
System.out.println("Done camel no maven");
// stop the CamelContext
context.stop();
|
Apache Camel: creating simple POJO pipelines (put a POJO in and get a POJO out)
|
Using anassociative array:awk '{a[$1]+=$2}END{for (i in a){print i,a[i]}}' infileAlternative topreserve order:awk '!($1 in a){b[++cont]=$1}{a[$1]+=$2}END{for (c=1;c<=cont;c++){print b[c],a[b[c]]}}' infileAnother way wherearraysarenot needed:awk 'lip != $1 && lip != ""{print lip,sum;sum=0}
{sum+=$NF;lip=$1}
END{print lip,sum}' infileResult81.220.49.127 6654
81.226.10.238 328
81.227.128.93 84700
|
Below I have some raw data. My goal is to match 'column one' values and have the total number of bytes in a single line of output for each ip address.
For example output:81.220.49.127 6654
81.226.10.238 328
81.227.128.93 84700Raw Data:81.220.49.127 328
81.220.49.127 328
81.220.49.127 329
81.220.49.127 367
81.220.49.127 5302
81.226.10.238 328
81.227.128.93 84700Can anyone advise me on how to do this.
|
Unix Pipeling "AWK" - The summation whilst matching
|
You might be able to useSitecore.Context.Page.FilePath. It will be set to yourLayouton a Sitecore item (i.e. '/layouts/standard layout.aspx') while on a static page it'll be the path to your page.If your static pages are all in a different location from your Sitecore layouts it might be as easy as just matching part of theFilePath.
|
I'm working in the Sitecore pipeline on a processor. I need to determine if a request being sent is for a static.aspxpage that does not have a context item or if the page being requested does not exist.This will happen right after theItemResolverprocess fires so the database is set tomasterfor both an.aspxrunning through the pipeline and a request for a page that doesn't exist.I can't check if theContext.Item == nullbecause the static page has no item associated with it and I'd like to keep it that way since the content on said page will not be changing.Let me know if you have any ideas to differentiate between these!
|
Determining if a page is a static .aspx page or does not exist in the sitecore pipeline
|
The shell will spawn both processes at the same time and connect the output of the first to the input of the second. If you have two long-running processes then you'll see both in the process table. There's no intermediate step, no storage of data from the first program. It's apipe, not abucket.Note - there may be some buffering involved.
|
As i know, if we do for example "./program | grep someoutput", it greps after program process finished. Do you know, how do it by income?
|
How to transfer data in unix pipeline in real time?
|
$msg = [string](git status) | where { $_.Contains("nothing to commit") }
|
In my PowerShell script, I'd like to use the output of a tool likegit.For example, the command linegit statusreturns# On branch master
nothing to commit (working directory clean)Now I tried to use this output in the following pipeline command:git status | $_.Contains("nothing to commit")But I get the errorExpressions are only allowed as the first element of a pipeline.What am I doing wrong?
|
Use console output of command-line tool in Powershell pipeline
|
You can use anHTTP Module, however, to use it you will need to map all requests to IIS which can be done using awild card map.. This will have a performance impact because you're going to be forcing all requests through the .net runtime.You could also write your own ISAPI filter, but I believe you'll have to use C++.EditASP.Net has a default handler if your doign the wild card mapping you need, make sure you still have this in your web.config in your windows/microsoft.net/framework..../config/ folder:<httpHandlers>
....
<add path="*" verb="GET,HEAD,POST" type="System.Web.DefaultHttpHandler" validate="True"/>
</httpHandlers>You might have also removed the handler in your web's config file. Lastly you could try and add an explicit mapping for the pdf file.
|
I'd like to run a discrete chunk of .Net code for each and every request that comes through a certain Web site in IIS. This is completely isolated code -- it doesn't affect anything before or after it -- it logs some information from the request, then ends.Note that this is not something I can put in Application_OnRequestBegin or some other ASP.Net file because I need this to execute for non .Net files (PDFs, images, etc.). It needs to execute for requests that would normally not hit the .Net framework.Is an HTTP Module what I'm looking for? I've RTFM'ed quite a bit, but it seems there a number of different ways to manipulate the pipeline, and I'm not quite sure which one I should be using.
|
What part of the ASP.Net framework do you use to execute arbitrary code on every inbound request to IIS?
|
You need to specify the environment you want to use within the job.
You can specify an environment for each job in your workflow. To do so, add a jobs.<job_id>.environment key followed by the name of the environment.
https://docs.github.com/en/actions/deployment/targeting-different-environments/using-environments-for-deployment
Only then shall the environment variables defined for that environment be available. I have fixed your workflow file:
name: 'Github Workflow'
run-name: 'Terraform Plan-Apply'
on:
workflow_dispatch:
inputs:
Configuration:
description: 'Deployment Group'
required: True
type: choice
options:
- Applications
- KeyVault
- Synapse
- Databricks
Environment:
description: 'Environment'
required: True
type: choice
options:
- Dev
- QA
- Prod
jobs:
deployment:
runs-on: ${{ vars.Runner }}
environment: ${{ inputs.Environment }}
|
I am setting up a Terraform pipeline in a GitHub workflow. Below is the initial set of code:
name: 'Github Workflow'
run-name: 'Terraform Plan-Apply'
on:
workflow_dispatch:
inputs:
Configuration:
description: 'Deployment Group'
required: True
type: choice
options:
- Applications
- KeyVault
- Synapse
- Databricks
Environment:
description: 'Environment'
required: True
type: choice
options:
- Dev
- QA
- Prod
GithubRunner:
description: 'Environment'
required: True
default: ${{ env.Runner }}
Under Settings in GitHub, I have created three environments: Dev, QA and Prod. Each environment has a variable named 'Runner' which holds the runner information. I tried the above code and it is not fetching the details.
Output of the above code:
It's showing as ${{ env.Runner }} in the workflow. Is there a way to read the value from the environment variables, i.e. fetch the value according to the environment selected?
|
Read value from Environment variables in Github inputs
|
The short answer is that -Filter does not take pipeline input, hence no delay-bind script block is possible. And there is a good reason for this: the filter needs to be passed as-is to the Filter Provider; it cannot be evaluated beforehand. As a minimal example, try this function, whose -Filter parameter takes ValueFromPipeline, and then remove the ValueFromPipeline attribute.
function Get-ADThing {
param(
[Parameter(ValueFromPipeline)]
[string] $Filter
)
process {
$Filter
}
}
'foo', 'bar', 'baz' | Get-ADThing -Filter { "name -eq $_" }
|
I have been troubleshooting code today and I discovered that this code does not work. I am using a PowerShell pipeline to send an array of distinguished names to search Active Directory for users with a manager.
$managers | Get-ADUser -Filter {manager -eq $_}
However, if I expand the command to this example with a foreach-object loop and an explicit variable, the code does work.
$managers | %{$manager = $_; Get-ADUser -Filter {manager -eq $manager}}
What part of the source code forces users to write out expressions like this? I'm not familiar with the source code for Get-ADUser, but in working with native PSCustomObjects, using the pipeline works as expected in the first example.
|
Why does the Get-ADUser -Filter command not accept $_ as input?
|
You can create a webhook in Teams and, using curl, send the data you need in the script section.
script:
- curl -H "Content-Type:application/json" -d '{"test": "test"}' $WEBHOOK_URL
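If you would rather keep the call in a script than in an inline curl command, a rough Python equivalent is sketched below (it assumes WEBHOOK_URL is set as a CI/CD variable and that a Python interpreter is available in the job image; the message text is made up):
# Post a simple text payload to the Teams incoming webhook.
import json
import os
import urllib.request

payload = {"text": "Deploy of master to the test environment has started."}
req = urllib.request.Request(
    os.environ["WEBHOOK_URL"],
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)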
|
I'm trying to create a notification when a specific pipeline step is executed. In particular, I have a GitLab pipeline that deploys the master branch to the test environment, like the following:
deploy:
stage: deploy
script:
# script for deploy
only:
- master
I want to publish to the Teams channel only when this step is triggered. Is there a way to do this? I already know about the standard integration between GitLab and Teams, but that publishes a message every time the pipeline runs, not only for a specific step.
|
GitLab CI teams notification on specific step
|
FunctionTransformer creates a scikit-learn compatible transformer from a custom function. The transformer must return the result of this function so that it can be used in the next step. The problem with your code is that weekfunc basically takes in a DataFrame and returns nothing. The example below uses the function weekfunc inside a pipeline, as follows:
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OrdinalEncoder, FunctionTransformer
from sklearn.linear_model import LinearRegression
from datetime import datetime
import pandas as pd
X_train = pd.DataFrame(
{
"date": [datetime.today()],
"country": ["ch"],
"store": ["Gamestop"],
"product": ["Xbox"],
}
)
y_train = pd.Series([1])
def weekfunc(df):
    return (df["date"].dt.weekday >= 5).to_frame()
get_weekend = FunctionTransformer(weekfunc)
col_trans = ColumnTransformer(
[
("weekend transform", get_weekend, ["date"]),
("label encoding", OrdinalEncoder(), ["country", "store", "product"]),
]
)
pipe = Pipeline([("preprocessing", col_trans), ("regression", LinearRegression())])
pipe.fit(X_train, y_train)
In addition, scikit-learn Pipelines not only look cleaner; they also help to create models that contain both the preprocessing and the modeling. This is very helpful for deploying a model in production.
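As a quick, hypothetical usage check, the fitted pipeline can then be called on new rows shaped like the training frame:
# Predict on a new single-row frame with the same columns as X_train.
X_new = pd.DataFrame(
    {
        "date": [datetime.today()],
        "country": ["ch"],
        "store": ["Gamestop"],
        "product": ["Xbox"],
    }
)
print(pipe.predict(X_new))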
|
I'm learning to use pipelines as they look cleaner. So, I'm working on the tabular playground competition on Kaggle. I'm trying to build a pretty simple pipeline where I use a FunctionTransformer to add a new column to the dataframe, do Ordinal Encoding, and finally fit the data on a LinearRegression model. Here is the code:
def weekfunc(df):
    print(df)
    df = pd.to_datetime(df)
    df['weekend'] = df.dt.weekday
    df['weekend'].replace(range(5), 0, inplace = True)
    df['weekend'].replace([5,6], 1, inplace = True)
get_weekend = FunctionTransformer(weekfunc)
col_trans = ColumnTransformer([
('weekend transform', get_weekend,['date']),
('label encoding', OrdinalEncoder(), ['country', 'store', 'product'])
])
pipe = Pipeline([
('label endoer', col_trans),
('regression', LinearRegression())
])
pipe.fit(X_train,y_train)
But the code breaks on the first step (FunctionTransformer) and gives me the following error:
to assemble mappings requires at least that [year, month, day] be specified: [day,month,year] is missing
which is weird, since I can print inside the function being executed, which shows it is in datetime format. Even get_weekend.transform(X_train['date']) works as intended. But it doesn't seem to work when all the steps are joined.
|
Sklearn pipeline breaks when using FunctionTransformer
|
I don't think that the epochs parameter is defined for MLPClassifier; you should use the max_iter parameter instead. Then, if you want to specify hyperparameters within a Pipeline, you can do as follows:
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.datasets import make_classification
X, y = make_classification()
model = make_pipeline(SimpleImputer(), StandardScaler(), MLPClassifier())
params = {
'mlpclassifier__max_iter' : 10,
'mlpclassifier__batch_size' : 20
}
model.set_params(**params)
model.fit(X, y)
I would suggest using this notation, as you can easily reuse it to perform a GridSearchCV.
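For example, a short sketch of how the same step-name prefixes slot into a grid search (the parameter values here are arbitrary):
# GridSearchCV tunes the MLPClassifier inside the pipeline via the
# 'mlpclassifier__' prefix; imputation and scaling are refit on each fold.
from sklearn.model_selection import GridSearchCV

param_grid = {
    'mlpclassifier__hidden_layer_sizes': [(50,), (100,)],
    'mlpclassifier__max_iter': [200, 400],
}
search = GridSearchCV(model, param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)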
|
I have a neural network model which I wish to fit to my training data. When I run the below line of code
history = pipeline.fit(inputs[train], targets[train], epochs=epochs, batchsize=batchsize)
I receive the following error message:
Pipeline.fit does not accept the epochs parameter. You can pass parameters to specific
steps of your pipeline using the stepname__parameter format, e.g.
`Pipeline.fit(X, y, logisticregression__sample_weight=sample_weight)`
How to resolve this issue?
|
Pipeline issues in sklearn [closed]
|
If I understand the question correctly, you do not want the sync_s3:nonprod job to run if sync_s3:prod is run(?) To achieve this, on the sync_s3:nonprod job you should be able to copy the same rule from sync_s3:prod together with when: never:
stages:
- sync:nonprod
- sync:prod
.sync_s3:
image:
name: image
entrypoint: [""]
script:
- aws configure set region eu-west-1
- aws s3 sync ${FOLDER_ENV} s3://img-${AWS_ENV} --delete
sync_s3:prod:
stage: sync:prod
rules:
- if: $CI_COMMIT_TAG
changes:
- prod/*
extends: .sync_s3
variables:
AWS_ENV: prod
FOLDER_ENV: prod/
tags:
- gaming_prod
sync_s3:nonprod:
stage: sync:nonprod
rules:
- if: $CI_COMMIT_TAG
changes:
- prod/*
when: never
- changes:
- pp2/*
extends: .sync_s3
variables:
AWS_ENV: nonprod
FOLDER_ENV: pp2/
tags:
- gaming_nonprod
|
I need help from the GitLab gurus. I have the following pipeline below. I expect the "sync_s3:prod" job to run only when I push a new git tag, but GitLab triggers both jobs. Why is it behaving like this? I created the $CI_COMMIT_TAG rule only for one job. Any ideas?
stages:
- sync:nonprod
- sync:prod
.sync_s3:
image:
name: image
entrypoint: [""]
script:
- aws configure set region eu-west-1
- aws s3 sync ${FOLDER_ENV} s3://img-${AWS_ENV} --delete
sync_s3:prod:
stage: sync:prod
rules:
- if: $CI_COMMIT_TAG
changes:
- prod/*
extends: .sync_s3
variables:
AWS_ENV: prod
FOLDER_ENV: prod/
tags:
- gaming_prod
sync_s3:nonprod:
stage: sync:nonprod
rules:
- changes:
- pp2/*
extends: .sync_s3
variables:
AWS_ENV: nonprod
FOLDER_ENV: pp2/
tags:
- gaming_nonprod
|
GITLAB CI pipeline, run job only with git tag
|
I have used and developed with both of them. First, we need to analyze the tools:
Camunda is a BPM (Business Process Management) tool. It runs workflows and manages decisions.
NiFi is a data flow manager. It consumes data from one platform, processes it, and then publishes or sends it to another platform or tool. You can solve all "cross-cutting" data problems with it.
As for your question: if "pipeline task" means a CI/CD operation, you don't need to use these tools. But if "pipeline task" means pushing data and managing the process, the right choice depends on your plan.
If you process the data without any user confirmation, consuming it automatically depending on decisions checked by the tool, you can use Apache NiFi.
If you process the data and need some confirmation by the user to resume the task (also with some scripted logic for the confirmation), you can use Camunda.
|
When I check, Apache NiFi and Camunda both seem suitable for pipeline tasks. I'm confused. Which one is best for pipeline tasks?
|
Which one is best apache nifi or Camunda for pipeline tasks [closed]
|
An efficient way could be to incorporate the folder structure explicitly inside your Snakefile. For example, you could use the content of a parameter, e.g. example_path, inside the Snakefile and then pass it via config:
snakemake --config example_path_in=/path/to/input example_path_out=/path/to/output
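Inside the Snakefile, the values then show up in the config dictionary. A minimal sketch (the rule names and the .gz-to-.txt step are assumptions, not your actual workflow):
# Read the folders passed on the command line via --config.
IN_DIR = config["example_path_in"]
OUT_DIR = config["example_path_out"]

# Discover the samples present in the input folder.
SAMPLES = glob_wildcards(IN_DIR + "/{sample}.gz").sample

rule all:
    input:
        expand(OUT_DIR + "/{sample}.txt", sample=SAMPLES)

rule process:
    input:
        IN_DIR + "/{sample}.gz"
    output:
        OUT_DIR + "/{sample}.txt"
    shell:
        "zcat {input} > {output}"
You would then call it from the outer shell loop as snakemake --cores 30 -s trial.snakemake --config example_path_in=$folder example_path_out=/path/to/output/$folderName.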
|
So I have a pipeline written in shell which loops over three folders, with an inner loop over the files inside each folder. For the next step, I have a Snakemake file which takes an input folder and an output folder. For a trial run I gave the folder paths inside the Snakemake file. So I was wondering, is there any way I can give the input and output folder paths explicitly? E.g.
snakemake --cores 30 -s trial.snakemake /path/to/input /path/to/output
since I want to change the input and output according to the main loop. I tried import sys and using sys.argv[1] and sys.argv[2] inside the Snakemake file but it's not working. Below is the snippet of my pipeline; it takes three folders for now: ABC_Samples, DEF_Samples, XYZ_Samples.
for folder in /path/to/*_Samples
do
folderName=$(basename $folder _Samples)
mkdir -p /path/to/output/$folderName
for files in $folder/*.gz
do
# do something
done
snakemake --cores 30 -s trial.snakemake /path/to/output/$folderName /path/to/output2/
done
But the above doesn't work. So is there any way I can do this? I am really new to Snakemake. Thank you in advance.
|
How to give explicitly input and output to Snakemake file?
|
Pipeline is used to assemble several steps such as preprocessing, transformations, and modeling. StratifiedKFold is used to split your dataset to assess the performance of your model. It is not meant to be used as a part of the Pipeline, as you do not want to perform it on new data. Therefore it is normal to perform it outside of the pipeline's structure.
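As a minimal sketch (your feature step isn't shown, so a StandardScaler stands in for it), you pass the whole pipeline to cross_val_score / cross_val_predict together with the StratifiedKFold object, so the preprocessing is refit on each training fold:
# Cross-validate the full pipeline: preprocessing is fit per training fold only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, random_state=42)

pipe = Pipeline([
    ('feats', StandardScaler()),  # stand-in for your feature step
    ('clf', RandomForestClassifier(n_estimators=11)),
])

skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)
scores = cross_val_score(pipe, X, y, cv=skf)      # per-fold accuracy
predicts = cross_val_predict(pipe, X, y, cv=skf)  # out-of-fold predictions
print(scores.mean())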
|
I have the following piece of code:
from sklearn import model_selection
from sklearn.ensemble import RandomForestClassifier
import pandas as pd
from sklearn.pipeline import Pipeline
...
x_train, x_test, y_train, y_test= model_selection.train_test_split(dataframe[features_],dataframe[labels], test_size=0.30,random_state=42, shuffle=True)
classifier = RandomForestClassifier(n_estimators=11)
pipe = Pipeline([('feats', feature), ('clf', classifier)])
pipe.fit(x_train, y_train)
predicts = pipe.predict(x_test)
Instead of train test split, I want to use k-fold cross validation to train my model. However, I do not know how to make it work with the pipeline structure. I came across https://scikit-learn.org/stable/modules/compose.html but I could not fit it to my code. I want to use from sklearn.model_selection import StratifiedKFold if possible. I can use it without the pipeline structure but I can not use it with the pipeline.
Update: I tried this but it generates an error.
x_train = dataframe[features_]
y_train = dataframe[labels]
skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)
classifier = RandomForestClassifier(n_estimators=11)
#pipe = Pipeline([('feats', feature), ('clf', classifier)])
#pipe.fit(x_train, y_train)
#predicts = pipe.predict(x_test)
predicts = cross_val_predict(classifier, x_train , y_train , cv=skf)
|
From train test split to cross validation in sklearn using pipeline
|
GitLab has a super robust API. It's probably the way to go here.
Terms
I do want to standardize on some terms so I can make sure my recommendation is making sense. You say that your stages are running in parallel - but it is jobs that run in parallel within a given stage. For my response, I'm going to assume that you meant that you've a single stage on the 'internal repo' containing 3 jobs.
Setup
Create a second stage in your 'internal repository' containing a single job. This single job in the second stage will work as a synchronizer, since the second stage will not begin until all jobs within the first stage complete. This job should have a single activity, which is to call to your "external pipeline" using the GitLab jobs API. You'd configure a trigger to PLAY the job which was set to only be manual. https://docs.gitlab.com/ee/ci/triggers/
Configure the "external jobs" to be set to manual: true. This will prevent them from starting until they've been given the go-ahead.
Example
stages:
- test
- remote_trigger
Linter:
script:
- echo "I linted lol!"
stage: test
Security Check:
script:
- echo "I so secure!"
stage: test
Start Terraform:
script:
- curl --request POST --form "token=$CI_JOB_TOKEN" --form ref=master "https://gitlab.example.com/api/v4/projects/9/trigger/pipeline"
stage: remote_trigger
This would create 3 jobs over 2 stages - once all parallel jobs (security check & linter) in the first stage complete, the Terraform step can begin.
|
I have defined 3 stages in gitlab-ci.yml. When there is a new commit, the pipeline runs and these 3 stages run in parallel, which is expected and needed. (These stages run pre-requisite steps like security checks on code and other linting functions.) I also have Scalr (another provider) inject external stages into the same pipeline (these run terraform policy checks and plan and apply). However, the problem is that these external stages kick off in parallel with the internal stages mentioned above. I would like GitLab to pause any execution of the external stages until AFTER the internal (pre-req) stages have finished. In case you are wondering, running the terraform plan and apply as GitLab internal stages is not an option. Any way to accomplish this?
|
How to make external stages in gitlab pipeline "wait" until all the gitlab internal stages are done?
|
The error is telling you that one of your categorical features has a new category in it. When you trained the OneHotEncoder, it saved all the unique values in those columns and makes a dummy column for each of those; but it didn't see 9 in one of those columns in the training (or testing) data, while in the submission data a 9 is present. You can set handle_unknown='ignore' in the encoder to silently ignore unseen levels, encoding them as all-zeros.
handle_unknown : {‘error’, ‘ignore’}, default=’error’
Whether to raise an error or ignore if an unknown categorical feature is present during transform (default is to raise). When this parameter is set to ‘ignore’ and an unknown category is encountered during transform, the resulting one-hot encoded columns for this feature will be all zeros. In the inverse transform, an unknown category will be denoted as None.
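A minimal sketch of your preprocessing with the flag added (column names copied from the question):
# Unseen categories in the submission data are now encoded as all-zero rows
# instead of raising ValueError.
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder

preprocess = make_column_transformer(
    (StandardScaler(), ['Age', 'Fare']),
    (OneHotEncoder(handle_unknown='ignore'),
     ['Pclass', 'SibSp', 'Parch', 'Family_size', 'Sex', 'Embarked', 'Initial', 'Fare_cat']))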
|
I'm new to machine learning and programming in general, so please take it easy on me. I'm doing this Titanic thing on Kaggle and tried using make_column_transformer() and pipeline after using train_test_split(). My code is as below.
preprocess = make_column_transformer(
(StandardScaler(), ['Age','Fare']),
(OneHotEncoder(), ['Pclass', 'SibSp','Parch', 'Family_size','Sex', 'Embarked', 'Initial', 'Fare_cat']))
model = make_pipeline(preprocess, LogisticRegression())
model.fit(X_train, y_train)
predictions = model.predict(X_test)
And this works just fine. However, when I tried it on the test dataset for the submission
y_train_submit = train_data.Survived.values
X_train_submit = train_data[['Pclass','Sex','Age','SibSp','Parch','Fare','Embarked', 'Initial','Family_size', 'Fare_cat']]
X_test_submit = test_data[['Pclass','Sex','Age','SibSp','Parch','Fare','Embarked', 'Initial','Family_size', 'Fare_cat']]
model.fit(X_train_submit, y_train_submit)
predictions_submit = model.predict(X_test_submit)
it gives this error on the predictions_submit line:
ValueError: Found unknown categories [9] in column 2 during transform
After some experiments, I figured that the error comes from OneHotEncoder(). The columns and data types are all exactly the same, and I did the exact same thing to both DataFrames, so why doesn't it work with the test dataset, and how should I apply OneHotEncoder in this case?
|
Pipeline with OneHotEncoder() works on train_test_split() but returns error on the real test data even though they're the same
|